Improved Noisy Student Training for Automatic Speech Recognition

Daniel S. Park, Yu Zhang, Ye Jia, Wei Han, Chung-Cheng Chiu, Bo Li, Yonghui Wu, Quoc V. Le

Recently, a semi-supervised learning method known as “noisy student training” has been shown to improve image classification performance of deep networks significantly. Noisy student training is an iterative self-training method that leverages augmentation to improve network performance. In this work, we adapt and improve noisy student training for automatic speech recognition, employing (adaptive) SpecAugment as the augmentation method. We find effective methods to filter, balance and augment the data generated in between self-training iterations. By doing so, we are able to obtain word error rates (WERs) of 4.2%/8.6% on the clean/noisy LibriSpeech test sets by using only the clean 100h subset of LibriSpeech as the supervised set and the rest (860h) as the unlabeled set. Furthermore, we are able to achieve WERs of 1.7%/3.4% on the clean/noisy LibriSpeech test sets by using the unlab-60k subset of LibriLight as the unlabeled set for LibriSpeech 960h. We thus improve upon the previous state-of-the-art clean/noisy test WERs achieved on LibriSpeech 100h (4.74%/12.20%) and LibriSpeech 960h (1.9%/4.1%).
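The iterative loop the abstract describes (a teacher pseudo-labels unlabeled data, the pseudo-labels are filtered, and a student is trained on augmented inputs before becoming the next teacher) can be sketched on a toy problem. This is a minimal illustration, not the paper's ASR pipeline: the nearest-centroid "model", the margin-based confidence filter, and Gaussian input noise standing in for SpecAugment are all illustrative assumptions.

```python
# Toy sketch of a noisy-student self-training loop on synthetic 1-D data.
# All model/helper names here are illustrative, not from the paper.
import random

random.seed(0)

def train_model(data):
    """'Train' a nearest-centroid classifier: return per-class means."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Return (label, confidence); confidence is a distance margin."""
    dists = sorted((abs(x - c), y) for y, c in model.items())
    (d0, y0), (d1, _) = dists[0], dists[1]
    conf = (d1 - d0) / (d1 + d0 + 1e-9)
    return y0, conf

def augment(x, noise=0.3):
    """Stand-in for SpecAugment: noise the student's training input."""
    return x + random.gauss(0.0, noise)

# Small labeled set plus a larger unlabeled pool (two Gaussian classes).
labeled = [(random.gauss(-2, 1), 0) for _ in range(5)] + \
          [(random.gauss(+2, 1), 1) for _ in range(5)]
unlabeled = [random.gauss(-2, 1) for _ in range(100)] + \
            [random.gauss(+2, 1) for _ in range(100)]

teacher = train_model(labeled)
for generation in range(3):
    # Teacher pseudo-labels the pool; keep only confident predictions
    # (the "filtering" step), then train the student on noised inputs.
    pseudo = []
    for x in unlabeled:
        y, conf = predict(teacher, x)
        if conf > 0.5:                       # confidence filter
            pseudo.append((augment(x), y))   # noise only the student's copy
    teacher = train_model(labeled + pseudo)  # student becomes next teacher

print(sorted(teacher))  # one centroid per class remains
```

The key structural point the sketch preserves is that noise is applied only to the student's training inputs, while the teacher labels clean data; the paper additionally balances the pseudo-labeled data between generations, which is omitted here for brevity.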

DOI: 10.21437/Interspeech.2020-1470

Cite as: Park, D.S., Zhang, Y., Jia, Y., Han, W., Chiu, C., Li, B., Wu, Y., Le, Q.V. (2020) Improved Noisy Student Training for Automatic Speech Recognition. Proc. Interspeech 2020, 2817-2821, DOI: 10.21437/Interspeech.2020-1470.

@inproceedings{park2020improved,
  author={Daniel S. Park and Yu Zhang and Ye Jia and Wei Han and Chung-Cheng Chiu and Bo Li and Yonghui Wu and Quoc V. Le},
  title={{Improved Noisy Student Training for Automatic Speech Recognition}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={2817--2821},
  doi={10.21437/Interspeech.2020-1470}
}