Transfer learning via distributed brain recordings enables reliable speech decoding

Aditya Singh, Tessy Thomas, +3 authors, Nitin Tandon

2025 · DOI: 10.1038/s41467-025-63825-0
Nature Communications · 0 Citations

TLDR

The group-derived decoder significantly outperformed models trained on individual data alone and remained robust despite variable electrode coverage and activation, highlighting a pathway toward generalizable neural prostheses for speech and language disorders that leverages large-scale intracranial datasets with distributed spatial sampling and shared task demands.

Abstract

Speech brain-computer interfaces (BCIs) combine neural recordings with large language models to achieve real-time intelligible speech. However, these decoders rely on dense, intact cortical coverage and are challenging to scale across individuals with heterogeneous brain organization. To derive scalable transfer learning strategies for neural speech decoding, we used minimally invasive stereo-electroencephalography recordings in a large cohort performing a demanding speech motor task. A sequence-to-sequence model enabled decoding of variable-length phonemic sequences prior to and during articulation. This supported a cross-subject transfer learning framework that isolates shared latent manifolds while allowing individual model initialization. The group-derived decoder significantly outperformed models trained on individual data alone, remaining robust despite variable coverage and activation. These results highlight a pathway toward generalizable neural prostheses for speech and language disorders by leveraging large-scale intracranial datasets with distributed spatial sampling and shared task demands.
