Class LM and Word Mapping for Contextual Biasing in End-to-End ASR

Rongqing Huang, Ossama Abdel-hamid, Xinwei Li, Gunnar Evermann


In recent years, all-neural, end-to-end (E2E) ASR systems have gained rapid interest in the speech recognition community. They convert speech input to text units with a single trainable neural network model. In ASR, many utterances contain rich named entities; such entities may be user- or location-specific and are often unseen during training. A single monolithic model also makes it difficult to exploit dynamic contextual information at inference time. In this paper, we propose to train a context-aware E2E model and to allow the beam search to traverse into the context FST during inference. We also propose a simple method to adjust the cost discrepancy between the context FST and the base model. This algorithm reduces the WER on named entity utterances by 57% with little accuracy degradation on regular utterances. Although an E2E model does not require a pronunciation dictionary, it is still attractive to exploit existing pronunciation knowledge to improve accuracy. We therefore propose an algorithm that maps rare entity words to common words via their pronunciations and treats the mapped words as alternative forms of the original word during recognition. This algorithm reduces the WER on named entity utterances by a further 31%.
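The pronunciation-based word mapping can be illustrated with a minimal sketch. This is not the paper's implementation: the toy lexicon, the `COMMON_WORDS` set, and all function names here are hypothetical, and a real system would draw pronunciations from a full dictionary or a G2P model. The idea is that a rare entity word (e.g. a contact name) is mapped to a common, phonetically identical word the model has seen in training; the common word then acts as an alternative surface form during recognition and is rewritten back to the rare word afterwards.

```python
# Illustrative sketch of pronunciation-based word mapping (assumptions:
# toy lexicon and word lists; not the paper's actual implementation).

# Toy pronunciation lexicon: word -> phoneme sequence.
LEXICON = {
    "kaity": ["K", "EY", "T", "IY"],   # rare contact name
    "katie": ["K", "EY", "T", "IY"],   # common word, same pronunciation
    "beau":  ["B", "OW"],
    "bo":    ["B", "OW"],
}

# Words assumed to be well covered in the model's training data.
COMMON_WORDS = {"katie", "bo"}

def map_rare_to_common(rare_word):
    """Return common words whose pronunciation matches the rare word's."""
    pron = LEXICON.get(rare_word)
    if pron is None:
        return []
    return [w for w in COMMON_WORDS
            if w != rare_word and LEXICON.get(w) == pron]

def rewrite_hypothesis(hyp_words, mapping):
    """Replace mapped common words in a hypothesis with the rare original."""
    inverse = {c: r for r, commons in mapping.items() for c in commons}
    return [inverse.get(w, w) for w in hyp_words]

mapping = {w: map_rare_to_common(w) for w in ("kaity", "beau")}
# mapping == {"kaity": ["katie"], "beau": ["bo"]}
hyp = rewrite_hypothesis(["call", "katie", "now"], mapping)
# hyp == ["call", "kaity", "now"]
```

In practice the matching would tolerate near-identical pronunciations rather than require exact phoneme equality, but the exact-match version above conveys the mapping-and-rewrite structure.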


DOI: 10.21437/Interspeech.2020-1787

Cite as: Huang, R., Abdel-hamid, O., Li, X., Evermann, G. (2020) Class LM and Word Mapping for Contextual Biasing in End-to-End ASR. Proc. Interspeech 2020, 4348-4351, DOI: 10.21437/Interspeech.2020-1787.


@inproceedings{Huang2020,
  author={Rongqing Huang and Ossama Abdel-hamid and Xinwei Li and Gunnar Evermann},
  title={{Class LM and Word Mapping for Contextual Biasing in End-to-End ASR}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={4348--4351},
  doi={10.21437/Interspeech.2020-1787},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1787}
}