Decoding Imagined, Heard, and Spoken Speech: Classification and Regression of EEG Using a 14-Channel Dry-Contact Mobile Headset

Jonathan Clayton, Scott Wellington, Cassia Valentini-Botinhao, Oliver Watts


We investigate the use of a 14-channel mobile EEG device for decoding heard, imagined, and articulated English phones from brainwave data. To this end we introduce a dataset that fills a current gap in the range of open-access EEG datasets for speech processing: data collected with a lightweight, affordable EEG device made for the consumer market. We evaluate two classification models, as well as a regression model for reconstructing spectral features of the original speech signal. We report that our classification performance is almost on a par with similar findings that use EEG data collected with research-grade devices. We conclude that commercial-grade devices can be used as speech-decoding BCIs with minimal signal processing.
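
To make the two task types concrete, below is a minimal, self-contained sketch of phone classification and spectral-feature regression from 14-channel EEG epochs. It uses synthetic data; the window length, feature extraction (log channel variance), and model choices (linear SVM, ridge regression) are illustrative assumptions, not the authors' pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: trials x channels x samples.
# 14 channels matches the headset; window length is an assumption.
N_TRIALS, N_CHANNELS, N_SAMPLES = 200, 14, 256
N_CLASSES = 4        # e.g. a small set of English phones (assumption)
N_SPECTRAL = 23      # e.g. mel filterbank dimensions (assumption)

eeg = rng.standard_normal((N_TRIALS, N_CHANNELS, N_SAMPLES))
phone_labels = rng.integers(0, N_CLASSES, size=N_TRIALS)
spectral_targets = rng.standard_normal((N_TRIALS, N_SPECTRAL))

# Simple per-channel features: log variance of each channel per trial.
features = np.log(eeg.var(axis=2))

# Classification: predict the phone label (chance = 1 / N_CLASSES).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, features, phone_labels, cv=5).mean()
print(f"classification accuracy: {acc:.2f} (chance {1 / N_CLASSES:.2f})")

# Regression: reconstruct spectral features of the speech signal.
reg = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
reg.fit(features[:150], spectral_targets[:150])
pred = reg.predict(features[150:])
print(f"regression R^2 on held-out trials: "
      f"{r2_score(spectral_targets[150:], pred):.2f}")

On synthetic noise both scores hover around chance; with real epoched EEG the same pipeline would report how far decoding rises above that baseline.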


DOI: 10.21437/Interspeech.2020-2745

Cite as: Clayton, J., Wellington, S., Valentini-Botinhao, C., Watts, O. (2020) Decoding Imagined, Heard, and Spoken Speech: Classification and Regression of EEG Using a 14-Channel Dry-Contact Mobile Headset. Proc. Interspeech 2020, 4886-4890, DOI: 10.21437/Interspeech.2020-2745.


@inproceedings{Clayton2020,
  author={Jonathan Clayton and Scott Wellington and Cassia Valentini-Botinhao and Oliver Watts},
  title={{Decoding Imagined, Heard, and Spoken Speech: Classification and Regression of EEG Using a 14-Channel Dry-Contact Mobile Headset}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4886--4890},
  doi={10.21437/Interspeech.2020-2745},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2745}
}