Contextual RNN-T for Open Domain ASR

Mahaveer Jain, Gil Keren, Jay Mahadeokar, Geoffrey Zweig, Florian Metze, Yatharth Saraf


End-to-end (E2E) systems for automatic speech recognition (ASR), such as the RNN Transducer (RNN-T) and Listen-Attend-Spell (LAS), blend the individual components of a traditional hybrid ASR system — acoustic model, language model, pronunciation model — into a single neural network. While this has some nice advantages, it constrains the system to be trained only on paired audio and text. As a result, E2E models tend to have difficulty correctly recognizing rare words that are not seen frequently during training, such as entity names. In this paper, we propose modifications to the RNN-T model that allow it to utilize additional metadata text, with the objective of improving performance on these named entity words. We evaluate our approach on an in-house dataset sampled from de-identified public social media videos, which represent an open domain ASR task. By using an attention model to leverage the contextual metadata that accompanies a video, we observe a relative improvement of about 16% in Word Error Rate on Named Entities (WER-NE) for videos with related metadata.
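The abstract does not specify the exact attention architecture, but the core idea — attending over embeddings of metadata tokens with a decoder state as the query, then feeding the resulting context vector back into the model — can be sketched minimally as follows. This is an illustrative sketch assuming simple dot-product attention; the function and variable names (`contextual_bias`, `decoder_state`, `metadata_embeddings`) are hypothetical and not from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def contextual_bias(decoder_state, metadata_embeddings):
    """Attend over metadata token embeddings (hypothetical sketch).

    decoder_state:       (d,) query vector, e.g. from the prediction network
    metadata_embeddings: (n, d) one embedding per metadata token
    Returns (context, weights): a (d,) context vector that could be
    combined with the decoder state, and the (n,) attention weights.
    """
    scores = metadata_embeddings @ decoder_state   # dot-product scores, shape (n,)
    weights = softmax(scores)                      # attention distribution over tokens
    context = weights @ metadata_embeddings        # weighted sum of embeddings, shape (d,)
    return context, weights

# Toy example with random embeddings.
rng = np.random.default_rng(0)
d, n = 8, 5
query = rng.standard_normal(d)
metadata = rng.standard_normal((n, d))
context, weights = contextual_bias(query, metadata)
```

In a full RNN-T integration the context vector would typically be concatenated with (or added to) the decoder state before the joint network, biasing the output distribution toward tokens that appear in the video's metadata.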


DOI: 10.21437/Interspeech.2020-2986

Cite as: Jain, M., Keren, G., Mahadeokar, J., Zweig, G., Metze, F., Saraf, Y. (2020) Contextual RNN-T for Open Domain ASR. Proc. Interspeech 2020, 11-15, DOI: 10.21437/Interspeech.2020-2986.


@inproceedings{Jain2020,
  author={Mahaveer Jain and Gil Keren and Jay Mahadeokar and Geoffrey Zweig and Florian Metze and Yatharth Saraf},
  title={{Contextual RNN-T for Open Domain ASR}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={11--15},
  doi={10.21437/Interspeech.2020-2986},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2986}
}