Mixed Case Contextual ASR Using Capitalization Masks

Diamantino Caseiro, Pat Rondon, Quoc-Nam Le The, Petar Aleksic


End-to-end (E2E) mixed-case automatic speech recognition (ASR) systems that directly predict words in the written domain are attractive: they are simple to build, require no explicit capitalization model, support streaming capitalization with no effort beyond that required for streaming ASR, and are small. However, because these systems produce multiple versions of the same word with different capitalizations, and even different word segmentations for different case variants when wordpieces (WP) are predicted, they cause several problems for contextual ASR. In particular, the size of contextual models, and the time needed to build them, grows considerably with the number of variants per word. In this paper, we propose separating orthographic recognition from capitalization: the ASR system first predicts a word, then predicts its capitalization in the form of a capitalization mask. We show that capitalization masks achieve the same low error rate as traditional mixed-case ASR while reducing the size and compilation time of contextual models. Furthermore, we observe significant improvements in capitalization quality.
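To make the two-step idea concrete, here is a minimal sketch of applying a capitalization mask to a lowercase hypothesis. The exact mask encoding used in the paper is not specified in this abstract; the binary per-character encoding below (where '1' marks an uppercase character) is an illustrative assumption only.

```python
def apply_capitalization_mask(word: str, mask: str) -> str:
    """Recase a lowercase word using a per-character binary mask.

    Illustrative sketch only: '1' marks a character to uppercase.
    The paper's actual mask representation may differ.
    """
    assert len(word) == len(mask), "mask must align with the word"
    return "".join(c.upper() if m == "1" else c
                   for c, m in zip(word, mask))

# The recognizer would first emit the lowercase word, then its mask:
print(apply_capitalization_mask("iphone", "010000"))  # iPhone
print(apply_capitalization_mask("nasa", "1111"))      # NASA
```

Because the orthographic vocabulary then contains a single lowercase form per word, contextual models need only one entry per word rather than one per case variant.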


DOI: 10.21437/Interspeech.2020-2367

Cite as: Caseiro, D., Rondon, P., The, Q.L., Aleksic, P. (2020) Mixed Case Contextual ASR Using Capitalization Masks. Proc. Interspeech 2020, 686-690, DOI: 10.21437/Interspeech.2020-2367.


@inproceedings{Caseiro2020,
  author={Diamantino Caseiro and Pat Rondon and Quoc-Nam Le The and Petar Aleksic},
  title={{Mixed Case Contextual ASR Using Capitalization Masks}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={686--690},
  doi={10.21437/Interspeech.2020-2367},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2367}
}