Improving Tail Performance of a Deliberation E2E ASR Model Using a Large Text Corpus

Cal Peyser, Sepand Mavandadi, Tara N. Sainath, James Apfel, Ruoming Pang, Shankar Kumar


End-to-end (E2E) automatic speech recognition (ASR) systems lack the distinct language model (LM) component that characterizes traditional speech systems. While this simplifies the model architecture, it complicates the task of incorporating text-only data into training, which is important for recognizing tail words that occur rarely in audio-text pairs. While shallow fusion has been proposed as a method for incorporating a pre-trained LM into an E2E model at inference time, it has not yet been explored for very large text corpora, and it has been shown to be very sensitive to hyperparameter settings in the beam search. In this work, we apply shallow fusion to incorporate a very large text corpus into a state-of-the-art E2E ASR model. We explore the impact of model size and show that intelligent pruning of the training set can be more effective than increasing the parameter count. Additionally, we show that incorporating the LM in minimum word error rate (MWER) fine-tuning makes shallow fusion far less dependent on optimal hyperparameter settings, reducing the difficulty of that tuning problem.
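As a minimal illustration of the shallow-fusion idea described in the abstract (a sketch of the general technique, not the paper's implementation): each beam-search hypothesis's E2E model score is interpolated log-linearly with a pre-trained LM score using a fusion weight. The function names and the toy hypotheses below are hypothetical.

```python
def shallow_fusion_score(asr_logprob, lm_logprob, lm_weight=0.3):
    """Log-linear interpolation of ASR and LM log-probabilities."""
    return asr_logprob + lm_weight * lm_logprob

def rescore_beam(hypotheses, lm_scores, lm_weight=0.3):
    """Re-rank beam hypotheses by their fused score.

    hypotheses: list of (text, asr_logprob) pairs from the E2E model
    lm_scores:  dict mapping text -> log-probability under the external LM
    """
    fused = [
        (text, shallow_fusion_score(asr_lp, lm_scores[text], lm_weight))
        for text, asr_lp in hypotheses
    ]
    return sorted(fused, key=lambda pair: pair[1], reverse=True)

# Toy example: the external LM rescues a tail spelling the E2E model
# slightly disprefers.
hyps = [("call pizer", -1.2), ("cal peyser", -1.5)]
lm = {"call pizer": -9.0, "cal peyser": -2.0}
best, score = rescore_beam(hyps, lm, lm_weight=0.3)[0]
# fused: "call pizer" -> -3.9, "cal peyser" -> -2.1, so the tail
# spelling now ranks first
```

The sensitivity the abstract mentions shows up here as the choice of `lm_weight`: too small and the LM has no effect, too large and the LM overrides acoustic evidence.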


DOI: 10.21437/Interspeech.2020-1465

Cite as: Peyser, C., Mavandadi, S., Sainath, T.N., Apfel, J., Pang, R., Kumar, S. (2020) Improving Tail Performance of a Deliberation E2E ASR Model Using a Large Text Corpus. Proc. Interspeech 2020, 4921-4925, DOI: 10.21437/Interspeech.2020-1465.


@inproceedings{Peyser2020,
  author={Cal Peyser and Sepand Mavandadi and Tara N. Sainath and James Apfel and Ruoming Pang and Shankar Kumar},
  title={{Improving Tail Performance of a Deliberation E2E ASR Model Using a Large Text Corpus}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4921--4925},
  doi={10.21437/Interspeech.2020-1465},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1465}
}