Understanding Self-Attention of Self-Supervised Audio Transformers

Shu-wen Yang, Andy T. Liu, Hung-yi Lee


Self-supervised Audio Transformers (SAT) have achieved great success in many downstream speech applications such as ASR, but how they work has not been widely explored yet. In this work, we present multiple strategies for analyzing the attention mechanisms in SAT. We categorize attention maps into explainable categories and discover that each category possesses its own unique functionality. We provide a visualization tool for understanding multi-head self-attention, importance ranking strategies for identifying critical attention heads, and attention refinement techniques to improve model performance.
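
As a concrete illustration of the visualization component described above, here is a minimal sketch (in Python with NumPy and Matplotlib) of how per-head self-attention weights from one Transformer layer can be rendered as heatmaps. This is not the authors' released tool; the head count, sequence length, and toy attention weights are assumptions for illustration only.

import numpy as np
import matplotlib.pyplot as plt

def plot_attention_heads(attn, title="Self-attention"):
    """Plot one heatmap per attention head.

    attn: array of shape (num_heads, seq_len, seq_len) holding the
          softmax-normalized attention weights of a single layer.
    """
    num_heads = attn.shape[0]
    fig, axes = plt.subplots(1, num_heads, figsize=(3 * num_heads, 3))
    for h, ax in enumerate(np.atleast_1d(axes)):
        ax.imshow(attn[h], cmap="viridis", origin="lower")
        ax.set_title(f"head {h}")
        ax.set_xlabel("key frame")
        ax.set_ylabel("query frame")
    fig.suptitle(title)
    fig.tight_layout()
    plt.show()

# Toy example: random weights for a hypothetical 4-head layer over 50 frames.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 50, 50))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # row-wise softmax
plot_attention_heads(attn, title="Layer 0 attention (toy data)")

In practice the attention tensor would be extracted from a trained SAT layer rather than sampled randomly; heatmaps of this kind make patterns such as diagonal or vertical attention visible at a glance.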


DOI: 10.21437/Interspeech.2020-2231

Cite as: Yang, S., Liu, A.T., Lee, H. (2020) Understanding Self-Attention of Self-Supervised Audio Transformers. Proc. Interspeech 2020, 3785-3789, DOI: 10.21437/Interspeech.2020-2231.


@inproceedings{Yang2020,
  author={Shu-wen Yang and Andy T. Liu and Hung-yi Lee},
  title={{Understanding Self-Attention of Self-Supervised Audio Transformers}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3785--3789},
  doi={10.21437/Interspeech.2020-2231},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2231}
}