Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces

Frank Zhang, Yongqiang Wang, Xiaohui Zhang, Chunxi Liu, Yatharth Saraf, Geoffrey Zweig

In this work, we first show that on the widely used LibriSpeech benchmark, our transformer-based context-dependent connectionist temporal classification (CTC) system produces state-of-the-art results. We then show that by using wordpieces as modeling units combined with CTC training, we can greatly simplify the engineering pipeline compared to conventional frame-based cross-entropy training, excluding all the GMM bootstrapping, decision tree building and forced alignment steps, while still achieving very competitive word error rates. Additionally, using wordpieces as modeling units can significantly improve runtime efficiency, since we can use a larger stride without losing accuracy. We further confirm these findings on two internal VideoASR datasets: German, which, like English, is a fusional language, and Turkish, which is an agglutinative language.
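As a rough illustration of what wordpiece modeling units look like, the sketch below implements greedy longest-match segmentation in the style of BERT's WordPiece tokenizer. This is not the paper's actual implementation, and the toy vocabulary is invented for the example; it only shows how a word decomposes into subword units.

```python
def wordpiece_tokenize(word, vocab, unk="<unk>"):
    """Greedily split a word into the longest pieces found in vocab.

    Continuation pieces carry a '##' prefix, following BERT's WordPiece
    convention. Returns [unk] if the word cannot be decomposed.
    """
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        # Shrink the candidate substring until it matches a vocab entry.
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return [unk]  # no decomposition found for this word
        pieces.append(piece)
        start = end
    return pieces

# Toy vocabulary, purely illustrative:
vocab = {"play", "walk", "##ing", "##ed"}
print(wordpiece_tokenize("playing", vocab))  # ['play', '##ing']
print(wordpiece_tokenize("walked", vocab))   # ['walk', '##ed']
```

In an ASR system of the kind described here, such subword units replace context-dependent phones as the CTC output targets, which is what removes the need for alignment and decision-tree steps.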

 DOI: 10.21437/Interspeech.2020-1995

Cite as: Zhang, F., Wang, Y., Zhang, X., Liu, C., Saraf, Y., Zweig, G. (2020) Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces. Proc. Interspeech 2020, 976-980, DOI: 10.21437/Interspeech.2020-1995.

@inproceedings{zhang2020faster,
  author={Frank Zhang and Yongqiang Wang and Xiaohui Zhang and Chunxi Liu and Yatharth Saraf and Geoffrey Zweig},
  title={{Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={976--980},
  doi={10.21437/Interspeech.2020-1995}
}