Recognising Emotions in Dysarthric Speech Using Typical Speech Data

Lubna Alhinti, Stuart Cunningham, Heidi Christensen


Effective communication relies on the comprehension of both verbal and nonverbal information. People with dysarthria may lose their ability to produce intelligible and audible speech sounds, which in time may affect their way of conveying emotions, which are mostly expressed using nonverbal signals. Recent research shows some promise in automatically recognising the verbal part of dysarthric speech. However, this is the first study that investigates the ability to automatically recognise the nonverbal part. A parallel database of dysarthric and typical emotional speech is collected, and approaches to discriminating between emotions using models trained on either dysarthric (speaker dependent, matched) or typical (speaker independent, unmatched) speech are investigated for four speakers with dysarthria caused by cerebral palsy and Parkinson's disease. Promising results are achieved in both scenarios using SVM classifiers, opening new doors to improved, more expressive voice input communication aids.
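The paper itself does not include code; the sketch below only illustrates the unmatched scenario described in the abstract, assuming utterance-level MFCC statistics (via librosa) as features and scikit-learn's SVC as the classifier. The feature set, kernel, hyperparameters, and helper names (utterance_features, load_utterances, evaluate_unmatched) are illustrative assumptions rather than the authors' exact configuration.

import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def utterance_features(wav_path, n_mfcc=13):
    # Summarise one utterance as the mean and standard deviation of its MFCC frames.
    # (Assumed feature set; the paper's actual acoustic features may differ.)
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def load_utterances(file_label_pairs):
    # file_label_pairs: iterable of (wav_path, emotion_label) tuples.
    X = np.stack([utterance_features(path) for path, _ in file_label_pairs])
    y = np.array([label for _, label in file_label_pairs])
    return X, y

def evaluate_unmatched(typical_pairs, dysarthric_pairs):
    # Unmatched scenario: train on typical (control) speech from other speakers,
    # test on a dysarthric speaker's utterances.
    X_train, y_train = load_utterances(typical_pairs)
    X_test, y_test = load_utterances(dysarthric_pairs)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

The matched (speaker-dependent) scenario would instead train and test on utterances from the same dysarthric speaker, using an utterance-level train/test split.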


DOI: 10.21437/Interspeech.2020-1825

Cite as: Alhinti, L., Cunningham, S., Christensen, H. (2020) Recognising Emotions in Dysarthric Speech Using Typical Speech Data. Proc. Interspeech 2020, 4821-4825, DOI: 10.21437/Interspeech.2020-1825.


@inproceedings{Alhinti2020,
  author={Lubna Alhinti and Stuart Cunningham and Heidi Christensen},
  title={{Recognising Emotions in Dysarthric Speech Using Typical Speech Data}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4821--4825},
  doi={10.21437/Interspeech.2020-1825},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1825}
}