Towards a Speaker Independent Speech-BCI Using Speaker Adaptation

Debadatta Dash, Alan Wisler, Paul Ferrari, Jun Wang

Neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS) can cause locked-in syndrome (fully paralyzed but aware). A brain-computer interface (BCI) may be the only option for restoring communication for these patients. Current BCIs typically use visual or attentional correlates in neural activity to select letters randomly displayed on a screen, a process that is extremely slow (a few words per minute). Speech-BCIs, which aim to convert brain activity patterns directly into speech (neural speech decoding), hold the potential to enable much faster communication. Although a few recent studies have shown the potential of neural speech decoding, they have focused on speaker-dependent models. In this study, we investigated speaker-independent neural speech decoding of five continuous phrases from magnetoencephalography (MEG) signals recorded while eight subjects produced speech covertly (imagination) or overtly (articulation). We used both supervised and unsupervised speaker adaptation strategies to implement a speaker-independent model. Experimental results demonstrated that the proposed adaptation-based speaker-independent model significantly improves decoding performance. To our knowledge, this is the first demonstration of speaker-independent neural speech decoding.
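To illustrate why speaker adaptation matters for a speaker-independent decoder, here is a minimal NumPy sketch on synthetic data (not the paper's MEG features, models, or adaptation methods). Each simulated speaker shares the same per-phrase class patterns but adds a speaker-specific offset; a nearest-class-mean decoder trained on pooled speakers degrades on a held-out speaker, and a simple unsupervised recentring of the test features (a stand-in for unsupervised adaptation, using no target labels) recovers accuracy. All variable names and data parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 "phrases" (classes), 20-dim features, 30 trials each.
# Every speaker shares the class patterns but has a speaker-specific offset.
n_classes, n_dim, n_per_class = 5, 20, 30
class_means = rng.normal(0, 1, (n_classes, n_dim))

def make_speaker(offset_scale=2.0):
    """Generate one speaker's trials: class pattern + speaker offset + noise."""
    offset = rng.normal(0, offset_scale, n_dim)
    X = np.vstack([class_means[c] + offset + rng.normal(0, 0.3, (n_per_class, n_dim))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

# Pool 7 training speakers into a speaker-independent nearest-mean decoder.
Xs, ys = zip(*[make_speaker() for _ in range(7)])
X_train, y_train = np.vstack(Xs), np.concatenate(ys)
centroids = np.vstack([X_train[y_train == c].mean(0) for c in range(n_classes)])

def predict(X):
    # Nearest centroid by squared Euclidean distance.
    return np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)

# Held-out speaker: decode raw features vs. unsupervised recentring
# (shift test features so their global mean matches the training mean).
X_test, y_test = make_speaker()
acc_raw = (predict(X_test) == y_test).mean()
X_adapted = X_test - X_test.mean(0) + X_train.mean(0)
acc_adapted = (predict(X_adapted) == y_test).mean()
print(f"no adaptation: {acc_raw:.2f}, adapted: {acc_adapted:.2f}")
```

The recentring removes the unseen speaker's feature offset without using any of that speaker's labels, which is the basic intuition behind unsupervised adaptation; the paper's actual supervised and unsupervised strategies operate on MEG features and are not reproduced here.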

DOI: 10.21437/Interspeech.2019-3109

Cite as: Dash, D., Wisler, A., Ferrari, P., Wang, J. (2019) Towards a Speaker Independent Speech-BCI Using Speaker Adaptation. Proc. Interspeech 2019, 864-868, DOI: 10.21437/Interspeech.2019-3109.

@inproceedings{dash2019speakerindependent,
  author={Debadatta Dash and Alan Wisler and Paul Ferrari and Jun Wang},
  title={{Towards a Speaker Independent Speech-BCI Using Speaker Adaptation}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={864--868},
  doi={10.21437/Interspeech.2019-3109}
}