An Effective End-to-End Modeling Approach for Mispronunciation Detection

Tien-Hong Lo, Shi-Yan Weng, Hsiu-Jui Chang, Berlin Chen


Recently, end-to-end (E2E) automatic speech recognition (ASR) systems have garnered tremendous attention because of their great success and unified modeling paradigms in comparison to conventional hybrid DNN-HMM ASR systems. Despite the widespread adoption of E2E modeling frameworks for ASR, there is still a dearth of work investigating E2E frameworks for computer-assisted pronunciation training (CAPT), particularly for mispronunciation detection (MD). In response, we first present a novel use of the hybrid CTC-Attention approach for the MD task, taking advantage of the strengths of both CTC and the attention-based model while circumventing the need for phone-level forced alignment. Second, we perform input augmentation with text prompt information to make the resulting E2E model more tailored to the MD task. In addition, we adopt two MD decision methods to better complement the proposed framework: 1) decision-making based on a recognition confidence measure, or 2) decision-making based directly on the speech recognition results. A series of Mandarin MD experiments demonstrates that our approach not only simplifies the processing pipeline of existing hybrid DNN-HMM systems but also brings systematic and substantial performance improvements. Furthermore, input augmentation with text prompts appears to hold excellent promise for the E2E-based MD approach.
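The two core ideas above, a hybrid CTC-Attention training objective and an MD decision made by comparing recognition output against the text prompt, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; `hybrid_loss`, `detect_mispronunciations`, and the interpolation weight `lam` are hypothetical names, and a real system would align phone sequences by edit distance rather than position.

```python
def hybrid_loss(ctc_loss: float, attention_loss: float, lam: float = 0.3) -> float:
    """Multi-task objective typical of hybrid CTC-Attention models:
    L = lam * L_CTC + (1 - lam) * L_Attention, with lam in [0, 1]."""
    assert 0.0 <= lam <= 1.0
    return lam * ctc_loss + (1.0 - lam) * attention_loss


def detect_mispronunciations(recognized: list, prompt: list) -> list:
    """Flag each prompt phone the recognizer did not produce.

    For illustration only: compares position by position; an actual MD
    system would first align the hypothesis to the prompt (e.g., via
    minimum edit distance) before marking substitutions/deletions."""
    flags = []
    for i, ref_phone in enumerate(prompt):
        hyp_phone = recognized[i] if i < len(recognized) else None
        flags.append(hyp_phone != ref_phone)  # True = suspected mispronunciation
    return flags


# Example: second phone of the prompt was read incorrectly.
flags = detect_mispronunciations(["b", "a2", "n"], ["b", "a4", "n"])
```

Decision method 2) in the abstract corresponds to using such recognition-result comparison directly, while method 1) would additionally threshold a per-phone confidence score.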


DOI: 10.21437/Interspeech.2020-1605

Cite as: Lo, T.-H., Weng, S.-Y., Chang, H.-J., Chen, B. (2020) An Effective End-to-End Modeling Approach for Mispronunciation Detection. Proc. Interspeech 2020, 3027-3031, DOI: 10.21437/Interspeech.2020-1605.


@inproceedings{Lo2020,
  author={Tien-Hong Lo and Shi-Yan Weng and Hsiu-Jui Chang and Berlin Chen},
  title={{An Effective End-to-End Modeling Approach for Mispronunciation Detection}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3027--3031},
  doi={10.21437/Interspeech.2020-1605},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1605}
}