Automatic Estimation of Intelligibility Measure for Consonants in Speech

Ali Abavisani, Mark Hasegawa-Johnson


In this article, we present a model that estimates a real-valued intelligibility measure for individual speech segments. We trained Convolutional Neural Network (CNN) regression models for the stop consonants /p,t,k,b,d,ɡ/ paired with the vowel /ɑ/, to estimate the Signal-to-Noise Ratio (SNR) at which each Consonant-Vowel (CV) sound becomes intelligible to Normal Hearing (NH) listeners. The intelligibility measure for each sound, called SNR90, is defined as the SNR at which human participants recognize the consonant at least 90% correctly, on average, as determined in prior experiments with NH subjects. The CNN's performance is compared to a baseline prediction derived from automatic speech recognition (ASR): a constant offset subtracted from the SNR at which the ASR first labels the consonant correctly. Compared to this baseline, our models estimated the SNR90 intelligibility measure with less than 2 dB² Mean Squared Error (MSE) on average, whereas the baseline ASR-defined measure computes SNR90 with a variance of 5.2 to 26.6 dB², depending on the consonant.
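The SNR90 definition above (the SNR at which average recognition accuracy reaches 90%) can be sketched as an interpolation over a measured accuracy-vs-SNR curve. The function name, the data points, and the choice of linear interpolation below are illustrative assumptions, not the paper's actual procedure, which derives SNR90 from listener experiments:

```python
import numpy as np

def snr90(snrs, accuracies, threshold=0.90):
    """Estimate the SNR (dB) at which recognition accuracy first reaches
    `threshold`, via linear interpolation between measured points.

    Illustrative helper only: the paper's SNR90 comes from NH listener
    data, not from this interpolation scheme.
    """
    snrs = np.asarray(snrs, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    order = np.argsort(snrs)             # sort points by ascending SNR
    snrs, acc = snrs[order], acc[order]
    if acc[0] >= threshold:              # already intelligible at the lowest SNR
        return float(snrs[0])
    for i in range(1, len(snrs)):
        if acc[i] >= threshold:
            # linearly interpolate between the two bracketing points
            frac = (threshold - acc[i - 1]) / (acc[i] - acc[i - 1])
            return float(snrs[i - 1] + frac * (snrs[i] - snrs[i - 1]))
    return None                          # accuracy never reaches threshold
```

For example, accuracies of 0.4, 0.7, 0.95, and 1.0 measured at SNRs of -12, -6, 0, and 6 dB yield an interpolated SNR90 of -1.2 dB.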


DOI: 10.21437/Interspeech.2020-2121

Cite as: Abavisani, A., Hasegawa-Johnson, M. (2020) Automatic Estimation of Intelligibility Measure for Consonants in Speech. Proc. Interspeech 2020, 1161-1165, DOI: 10.21437/Interspeech.2020-2121.


@inproceedings{Abavisani2020,
  author={Ali Abavisani and Mark Hasegawa-Johnson},
  title={{Automatic Estimation of Intelligibility Measure for Consonants in Speech}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1161--1165},
  doi={10.21437/Interspeech.2020-2121},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2121}
}