Developing RNN-T Models Surpassing High-Performance Hybrid Models with Customization Capability

Jinyu Li, Rui Zhao, Zhong Meng, Yanqing Liu, Wenning Wei, Sarangarajan Parthasarathy, Vadim Mazalov, Zhenghao Wang, Lei He, Sheng Zhao, Yifan Gong


Because of its streaming nature, the recurrent neural network transducer (RNN-T) is a very promising end-to-end (E2E) model that may replace the popular hybrid model for automatic speech recognition. In this paper, we describe our recent development of RNN-T models with reduced GPU memory consumption during training, a better initialization strategy, and advanced encoder modeling with future lookahead. When trained with Microsoft’s 65 thousand hours of anonymized training data, the developed RNN-T model surpasses a very well-trained hybrid model, achieving both better recognition accuracy and lower latency. We further study how to customize RNN-T models to a new domain, which is important for deploying E2E models in practical scenarios. By comparing several methods for leveraging text-only data in the new domain, we found that updating RNN-T’s prediction and joint networks using speech synthesized from domain-specific text via text-to-speech (TTS) is the most effective.
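The customization recipe the abstract describes — keep the acoustic encoder fixed and update only the prediction and joint networks on TTS audio synthesized from domain-specific text — can be sketched as follows. This is a minimal illustrative PyTorch model under assumed dimensions and a toy architecture (the `TinyRNNT` class, layer sizes, and single-layer LSTMs are this sketch's assumptions, not the paper's actual configuration):

```python
# Minimal sketch (assumed toy architecture, NOT the paper's exact model) of
# RNN-T domain customization: freeze the encoder, fine-tune only the
# prediction and joint networks on TTS audio from domain-specific text.
import torch
import torch.nn as nn

class TinyRNNT(nn.Module):
    def __init__(self, feat_dim=80, vocab_size=100, hidden=256):
        super().__init__()
        # Acoustic encoder: consumes audio features (frozen during adaptation).
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Prediction network: consumes previous label sequence (fine-tuned).
        self.embed = nn.Embedding(vocab_size, hidden)
        self.prediction = nn.LSTM(hidden, hidden, batch_first=True)
        # Joint network: combines the two streams (fine-tuned).
        self.joint = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                   nn.Tanh(),
                                   nn.Linear(hidden, vocab_size))

    def forward(self, feats, labels):
        enc, _ = self.encoder(feats)                    # (B, T, H)
        pred, _ = self.prediction(self.embed(labels))   # (B, U, H)
        # Broadcast both streams over the (T, U) lattice and join them.
        t, u = enc.size(1), pred.size(1)
        enc = enc.unsqueeze(2).expand(-1, -1, u, -1)    # (B, T, U, H)
        pred = pred.unsqueeze(1).expand(-1, t, -1, -1)  # (B, T, U, H)
        return self.joint(torch.cat([enc, pred], dim=-1))  # (B, T, U, V)

model = TinyRNNT()
# Freeze the encoder; only the prediction and joint networks (plus the label
# embedding) receive gradient updates during domain adaptation.
for p in model.encoder.parameters():
    p.requires_grad = False
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

In actual adaptation the trainable parameters would then be optimized with the RNN-T loss on the synthesized domain audio; the point of freezing the encoder is to keep the acoustic model intact while the language-model-like prediction network absorbs the new-domain text.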


 DOI: 10.21437/Interspeech.2020-3016

Cite as: Li, J., Zhao, R., Meng, Z., Liu, Y., Wei, W., Parthasarathy, S., Mazalov, V., Wang, Z., He, L., Zhao, S., Gong, Y. (2020) Developing RNN-T Models Surpassing High-Performance Hybrid Models with Customization Capability. Proc. Interspeech 2020, 3590-3594, DOI: 10.21437/Interspeech.2020-3016.


@inproceedings{Li2020,
  author={Jinyu Li and Rui Zhao and Zhong Meng and Yanqing Liu and Wenning Wei and Sarangarajan Parthasarathy and Vadim Mazalov and Zhenghao Wang and Lei He and Sheng Zhao and Yifan Gong},
  title={{Developing RNN-T Models Surpassing High-Performance Hybrid Models with Customization Capability}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3590--3594},
  doi={10.21437/Interspeech.2020-3016},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3016}
}