Automated English Proficiency Scoring of Unconstrained Speech Using Prosodic Features

Okim Kang, David Johnson


This paper evaluates the performance of 17 machine learning classifiers in automatically scoring the English proficiency of unconstrained speech. Each classifier was tested with different groups of features drawn from a master set of prosodic measures grounded in Brazil’s (1997) model. The prosodic measures were calculated from the output of an ASR system that recognizes phones rather than words, together with other software designed to detect the elements of Brazil’s prosody model. The best classifier achieved a correlation of 0.68 (p < 0.01) between the computer-calculated proficiency ratings and those assigned by human raters. Using only prosodic features, this correlation is comparable to that reported for other systems that automatically score the proficiency of unconstrained speech.
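The evaluation described above scores each classifier by how well its predicted proficiency ratings correlate with human scores. Below is a minimal, hypothetical sketch of that kind of comparison: the feature matrix, rating scale, and the particular models are placeholder assumptions, not the authors' actual feature set or 17-classifier lineup.

```python
# Illustrative sketch only: correlate cross-validated machine scores with
# human proficiency ratings. Data and model choices are placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_speakers, n_features = 120, 12                 # e.g. 12 prosodic measures per speaker (assumed)
X = rng.normal(size=(n_speakers, n_features))    # prosodic feature matrix (placeholder)
y = rng.uniform(1, 6, size=n_speakers)           # human proficiency ratings (placeholder scale)

models = {
    "linear": LinearRegression(),
    "svr": SVR(kernel="rbf"),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # Cross-validated predictions keep scored speakers out of the training folds.
    pred = cross_val_predict(model, X, y, cv=5)
    r, p = pearsonr(pred, y)                     # agreement with human raters
    print(f"{name:7s}  r = {r:+.2f}  (p = {p:.3f})")
```

In this setup, the reported figure of 0.68 would correspond to the Pearson r of the best-performing model; with real prosodic features, one would also compare feature subsets, as the paper does with groups drawn from Brazil's model.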


 DOI: 10.21437/SpeechProsody.2018-125

Cite as: Kang, O., Johnson, D. (2018) Automated English Proficiency Scoring of Unconstrained Speech Using Prosodic Features. Proc. 9th International Conference on Speech Prosody 2018, 617-620, DOI: 10.21437/SpeechProsody.2018-125.


@inproceedings{Kang2018,
  author={Okim Kang and David Johnson},
  title={Automated English Proficiency Scoring of Unconstrained Speech Using Prosodic Features},
  year={2018},
  booktitle={Proc. 9th International Conference on Speech Prosody 2018},
  pages={617--620},
  doi={10.21437/SpeechProsody.2018-125},
  url={http://dx.doi.org/10.21437/SpeechProsody.2018-125}
}