Analyzing the Quality and Stability of a Streaming End-to-End On-Device Speech Recognizer

Yuan Shangguan, Kate Knister, Yanzhang He, Ian McGraw, Françoise Beaufays


The demand for fast and accurate incremental speech recognition increases as the applications of automatic speech recognition (ASR) proliferate. Incremental speech recognizers output chunks of partially recognized words while the user is still talking. These partial results can be revised before the ASR finalizes its hypothesis, causing instability. We analyze the quality and stability of on-device streaming end-to-end (E2E) ASR models. We first introduce a novel set of metrics that quantifies instability at the word and segment levels. We then study the impact of several model training techniques that improve E2E model quality but degrade model stability. Finally, we categorize the causes of instability and explore various solutions to mitigate them in a streaming E2E ASR system.
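The abstract does not spell out the instability metrics themselves, but a word-level instability measure over a stream of partial hypotheses can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual metric: it counts how many already-emitted words are later revised, normalized by the total number of word emissions across all partials.

```python
def common_prefix_len(a, b):
    """Length of the shared word-level prefix of two hypotheses."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def unstable_word_ratio(partials):
    """Illustrative word-level instability metric (an assumption, not the
    paper's definition): the fraction of emitted words that a later partial
    hypothesis revises, out of all word emissions in the stream."""
    revised = 0   # previously shown words that changed in a later partial
    emitted = 0   # total words shown across all partial results
    prev = []
    for hyp in partials:
        words = hyp.split()
        stable = common_prefix_len(prev, words)
        revised += len(prev) - stable
        emitted += len(words)
        prev = words
    return revised / emitted if emitted else 0.0
```

For example, the stream `["turn", "turn on", "turn off the", "turn off the lights"]` revises one previously emitted word ("on" becomes "off") out of ten total word emissions, giving a ratio of 0.1; a stream whose partials only ever append words scores 0.0.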


DOI: 10.21437/Interspeech.2020-1194

Cite as: Shangguan, Y., Knister, K., He, Y., McGraw, I., Beaufays, F. (2020) Analyzing the Quality and Stability of a Streaming End-to-End On-Device Speech Recognizer. Proc. Interspeech 2020, 591-595, DOI: 10.21437/Interspeech.2020-1194.


@inproceedings{Shangguan2020,
  author={Yuan Shangguan and Kate Knister and Yanzhang He and Ian McGraw and Françoise Beaufays},
  title={{Analyzing the Quality and Stability of a Streaming End-to-End On-Device Speech Recognizer}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={591--595},
  doi={10.21437/Interspeech.2020-1194},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1194}
}