Weak-Attention Suppression for Transformer Based Speech Recognition

Yangyang Shi, Yongqiang Wang, Chunyang Wu, Christian Fuegen, Frank Zhang, Duc Le, Ching-Feng Yeh, Michael L. Seltzer

Transformers, originally proposed for natural language processing (NLP) tasks, have recently achieved great success in automatic speech recognition (ASR). However, unlike text units, adjacent acoustic units (i.e., frames) are highly correlated, while long-distance dependencies between them are weak. This suggests that ASR will likely benefit from sparse and localized attention. In this paper, we propose Weak-Attention Suppression (WAS), a method that dynamically induces sparsity in attention probabilities. We demonstrate that WAS leads to consistent Word Error Rate (WER) improvement over strong transformer baselines. On the widely used LibriSpeech benchmark, our proposed method reduced WER by 10% on test-clean and 5% on test-other for streamable transformers, resulting in a new state of the art among streaming models. Further analysis shows that WAS learns to suppress the attention of non-critical and redundant continuous acoustic frames, and is more likely to suppress past frames than future ones. This indicates the importance of lookahead in attention-based ASR models.
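To make the idea concrete, here is a minimal sketch of dynamically suppressing weak attention probabilities. The threshold rule used below (per-query mean minus gamma times standard deviation of the attention probabilities, followed by renormalization) and the parameter name `gamma` are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def weak_attention_suppression(attn_probs, gamma=0.5):
    """Suppress weak attention probabilities and renormalize.

    attn_probs: array of shape (num_queries, num_keys), each row a
                probability distribution over keys (sums to 1).
    gamma:      suppression strength; the per-query threshold is
                mean - gamma * std (an illustrative choice).
    """
    mean = attn_probs.mean(axis=-1, keepdims=True)
    std = attn_probs.std(axis=-1, keepdims=True)
    threshold = mean - gamma * std
    # Zero out probabilities below the per-query threshold,
    # inducing sparsity in the attention distribution.
    suppressed = np.where(attn_probs < threshold, 0.0, attn_probs)
    # Renormalize so each row sums to 1 again. At least one entry per
    # row (any value >= the row mean) always survives suppression.
    return suppressed / suppressed.sum(axis=-1, keepdims=True)
```

Because the threshold adapts to each query's own attention statistics, a sharply peaked distribution suppresses many keys while a flat one suppresses few, which matches the "dynamic" sparsity described above.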

DOI: 10.21437/Interspeech.2020-1363

Cite as: Shi, Y., Wang, Y., Wu, C., Fuegen, C., Zhang, F., Le, D., Yeh, C., Seltzer, M.L. (2020) Weak-Attention Suppression for Transformer Based Speech Recognition. Proc. Interspeech 2020, 4996-5000, DOI: 10.21437/Interspeech.2020-1363.

@inproceedings{shi2020was,
  author={Yangyang Shi and Yongqiang Wang and Chunyang Wu and Christian Fuegen and Frank Zhang and Duc Le and Ching-Feng Yeh and Michael L. Seltzer},
  title={{Weak-Attention Suppression for Transformer Based Speech Recognition}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={4996--5000},
  doi={10.21437/Interspeech.2020-1363}
}