Subword Regularization: An Analysis of Scalability and Generalization for End-to-End Automatic Speech Recognition

Egor Lakomkin, Jahn Heymann, Ilya Sklyar, Simon Wiesler


Subwords are the most widely used output units in end-to-end speech recognition. They combine the best of two worlds by modeling the majority of frequent words directly while allowing open-vocabulary speech recognition by backing off to shorter units or characters to construct words unseen during training. However, mapping text to subwords is ambiguous, and multiple segmentations of the same text are often possible. Yet, many systems are trained using only the most likely segmentation. Recent research suggests that sampling subword segmentations during training acts as a regularizer for neural machine translation and speech recognition models, leading to performance improvements. In this work, we conduct a principled investigation of the regularizing effect of the subword segmentation sampling method for a streaming end-to-end speech recognition task. In particular, we evaluate the contribution of subword regularization as a function of the training dataset size. Our results suggest that subword regularization provides a consistent relative word-error-rate reduction of 2–8%, even in a large-scale setting with datasets of up to 20k hours. Further, we analyze the effect of subword regularization on the recognition of unseen words and its implications for beam diversity.
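The core idea the abstract describes — sampling one of several possible subword segmentations at training time instead of always taking the most likely one — can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation (such systems typically use SentencePiece's unigram-model sampling); the toy vocabulary and its log-probabilities below are hypothetical values chosen for illustration only.

```python
import math
import random

# Toy subword vocabulary with unigram log-probabilities (hypothetical values;
# real systems learn these, e.g. with SentencePiece's unigram model).
VOCAB = {"un": -2.0, "seen": -2.5, "u": -4.0, "n": -4.0,
         "s": -4.0, "e": -3.5, "en": -3.0, "se": -3.5}

def segmentations(word, vocab):
    """Enumerate every way to split `word` into in-vocabulary subwords."""
    if not word:
        return [[]]
    results = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            for rest in segmentations(word[i:], vocab):
                results.append([piece] + rest)
    return results

def sample_segmentation(word, vocab, alpha=0.5):
    """Sample a segmentation with probability proportional to
    exp(alpha * sum of piece log-probs); alpha flattens/sharpens
    the distribution, as in unigram-based subword regularization."""
    segs = segmentations(word, vocab)
    weights = [math.exp(alpha * sum(vocab[p] for p in s)) for s in segs]
    return random.choices(segs, weights=weights, k=1)[0]

# Deterministic (most likely) split vs. a sampled training-time variant:
best = max(segmentations("unseen", VOCAB),
           key=lambda s: sum(VOCAB[p] for p in s))
print(best)                                   # highest-probability split
print(sample_segmentation("unseen", VOCAB))   # may differ from run to run
```

Feeding the model a different sampled split of the same transcript on each epoch is what acts as the regularizer; a system trained only on `best` never sees the alternative decompositions it may need for words unseen during training.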


DOI: 10.21437/Interspeech.2020-1569

Cite as: Lakomkin, E., Heymann, J., Sklyar, I., Wiesler, S. (2020) Subword Regularization: An Analysis of Scalability and Generalization for End-to-End Automatic Speech Recognition. Proc. Interspeech 2020, 3600-3604, DOI: 10.21437/Interspeech.2020-1569.


@inproceedings{Lakomkin2020,
  author={Egor Lakomkin and Jahn Heymann and Ilya Sklyar and Simon Wiesler},
  title={{Subword Regularization: An Analysis of Scalability and Generalization for End-to-End Automatic Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3600--3604},
  doi={10.21437/Interspeech.2020-1569},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1569}
}