Automatic Discrimination of Apraxia of Speech and Dysarthria Using a Minimalistic Set of Handcrafted Features

Ina Kodrasi, Michaela Pernon, Marina Laganaro, Hervé Bourlard


To assist clinicians in the differential diagnosis and treatment of motor speech disorders, it is imperative to establish objective tools which can reliably characterize different subtypes of disorders such as apraxia of speech (AoS) and dysarthria. Objective tools in the context of speech disorders typically rely on thousands of acoustic features, which raises the risk of poor interpretability of the underlying mechanisms, over-adaptation to the training data, and weak generalization to unseen test data. Seeking to use a small number of acoustic features and motivated by the clinical-perceptual signs used for the differential diagnosis of AoS and dysarthria, we propose to characterize the differences between AoS and dysarthria using only six handcrafted acoustic features: three features reflecting segmental distortions, two features reflecting loudness and hypernasality, and one feature reflecting syllabification. These three sets of features are used to separately train three classifiers. At test time, the decisions of the three classifiers are combined through a simple majority voting scheme. Preliminary results show that the proposed approach achieves a discrimination accuracy of 90%, outperforming a state-of-the-art approach based on openSMILE features, which yields a discrimination accuracy of 65%.
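The decision-combination step described above can be sketched in a few lines. This is an illustrative toy example only, not the authors' implementation: the per-classifier decisions below are hypothetical placeholders, and the actual feature extraction and classifier training are not reproduced.

```python
import numpy as np

# Hypothetical binary decisions (0 = AoS, 1 = dysarthria) from the three
# classifiers on five test speakers; the values here are invented for
# illustration, not taken from the paper.
segmental_votes = np.array([0, 1, 1, 0, 1])  # classifier on segmental-distortion features
loudness_votes  = np.array([0, 1, 0, 0, 1])  # classifier on loudness/hypernasality features
syllabic_votes  = np.array([1, 1, 1, 0, 0])  # classifier on syllabification feature

# Majority vote: a speaker is labeled dysarthric if at least two of the
# three classifiers decide so.
votes = np.stack([segmental_votes, loudness_votes, syllabic_votes])
final = (votes.sum(axis=0) >= 2).astype(int)
print(final)  # → [0 1 1 0 1]
```

With only three voters and a binary decision, ties cannot occur, so the majority rule always yields a definite label.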


DOI: 10.21437/Interspeech.2020-2253

Cite as: Kodrasi, I., Pernon, M., Laganaro, M., Bourlard, H. (2020) Automatic Discrimination of Apraxia of Speech and Dysarthria Using a Minimalistic Set of Handcrafted Features. Proc. Interspeech 2020, 4991-4995, DOI: 10.21437/Interspeech.2020-2253.


@inproceedings{Kodrasi2020,
  author={Ina Kodrasi and Michaela Pernon and Marina Laganaro and Hervé Bourlard},
  title={{Automatic Discrimination of Apraxia of Speech and Dysarthria Using a Minimalistic Set of Handcrafted Features}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4991--4995},
  doi={10.21437/Interspeech.2020-2253},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2253}
}