Improved Training Strategies for End-to-End Speech Recognition in Digital Voice Assistants

Hitesh Tulsiani, Ashtosh Sapru, Harish Arsikere, Surabhi Punjabi, Sri Garimella


The speech recognition training data corresponding to digital voice assistants is dominated by wake-words. Training end-to-end (E2E) speech recognition models without careful attention to such data results in sub-optimal performance, as models prioritize learning wake-words. To address this problem, we propose a novel discriminative initialization strategy that introduces a regularization term to penalize the model for incorrectly hallucinating wake-words in the early phases of training. We also explore other training strategies, such as multi-task learning with listen-attend-spell (LAS), label smoothing via probabilistic modelling of silence, and the use of multiple pronunciations, and show how they can be combined with the proposed initialization technique. In addition, we show the connection between the cost function of the proposed discriminative initialization technique and the minimum word error rate (MWER) criterion. We evaluate our methods on two E2E ASR systems, a phone-based system and a word-piece-based system, trained on 6500 hours of Alexa’s Indian English speech corpus. We show that the proposed techniques yield word error rate reductions of 20% for the phone-based system and 6% for the word-piece-based system compared to corresponding baselines trained on the same data.
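The abstract does not give the exact form of the regularization term, but the idea of penalizing hallucinated wake-words can be sketched as a standard cross-entropy loss augmented with a penalty on the probability mass the model assigns to wake-word tokens at frames whose reference label is not a wake-word. The function name, the frame-level formulation, and the weight `lam` below are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def wake_word_penalized_loss(log_probs, targets, wake_word_ids, lam=0.5):
    """Illustrative sketch (not the paper's exact objective).

    log_probs: (T, V) array of per-frame log posteriors over V tokens.
    targets:   (T,) array of reference token ids.
    wake_word_ids: token ids corresponding to wake-words.
    lam: assumed regularization weight.
    """
    T = len(targets)
    # Standard per-frame cross-entropy against the reference labels.
    ce = -np.mean([log_probs[t, targets[t]] for t in range(T)])

    wake_ids = list(wake_word_ids)
    wake_set = set(wake_ids)
    # Penalty: average probability mass "hallucinated" on wake-word
    # tokens at frames where the reference is NOT a wake-word.
    penalty, count = 0.0, 0
    for t in range(T):
        if targets[t] not in wake_set:
            penalty += np.exp(log_probs[t, wake_ids]).sum()
            count += 1
    penalty /= max(count, 1)
    return ce + lam * penalty
```

With `lam = 0` this reduces to plain cross-entropy; increasing `lam` pushes the model away from assigning probability to wake-word tokens outside their reference positions, which is the intuition behind the discriminative initialization described above.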


DOI: 10.21437/Interspeech.2020-2036

Cite as: Tulsiani, H., Sapru, A., Arsikere, H., Punjabi, S., Garimella, S. (2020) Improved Training Strategies for End-to-End Speech Recognition in Digital Voice Assistants. Proc. Interspeech 2020, 2792-2796, DOI: 10.21437/Interspeech.2020-2036.


@inproceedings{Tulsiani2020,
  author={Hitesh Tulsiani and Ashtosh Sapru and Harish Arsikere and Surabhi Punjabi and Sri Garimella},
  title={{Improved Training Strategies for End-to-End Speech Recognition in Digital Voice Assistants}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2792--2796},
  doi={10.21437/Interspeech.2020-2036},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2036}
}