Intra-Utterance Similarity Preserving Knowledge Distillation for Audio Tagging

Chun-Chieh Chang, Chieh-Chi Kao, Ming Sun, Chao Wang


Knowledge Distillation (KD) is a popular area of research for reducing the size of large models while still maintaining good performance. The outputs of larger teacher models are used to guide the training of smaller student models. Given the repetitive nature of acoustic events, we propose to leverage this information to regulate KD training for Audio Tagging. This novel KD method, Intra-Utterance Similarity Preserving KD (IUSP), shows promising results for the audio tagging task. It is motivated by the previously published KD method Similarity Preserving KD (SP). However, instead of preserving the pairwise similarities between inputs within a mini-batch, our method preserves the pairwise similarities between the frames of a single input utterance. Our proposed KD method, IUSP, shows consistent improvements over SP across student models of different sizes on the DCASE 2019 Task 5 dataset for audio tagging. The improvement in micro AUPRC over the baseline is 27.1% to 122.4% larger, relative to SP's improvement over the baseline.
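The core idea described above can be sketched as follows. This is a hypothetical numpy illustration based on the SP formulation, not the authors' implementation: for one utterance, frame-level activations from teacher and student are turned into row-normalized frame-by-frame similarity matrices, and the distillation loss penalizes their difference.

```python
import numpy as np

def similarity_matrix(frames):
    """Row-normalized pairwise similarity map for one utterance.

    frames: (T, D) array of frame-level activations (T frames, D features).
    Returns a (T, T) matrix of frame-to-frame similarities, each row
    L2-normalized as in Similarity Preserving KD.
    """
    g = frames @ frames.T                       # (T, T) pairwise similarities
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    return g / np.maximum(norms, 1e-8)          # avoid division by zero

def iusp_loss(teacher_frames, student_frames):
    """Squared Frobenius distance between teacher and student similarity
    maps, averaged over the T^2 entries (following the SP loss form).

    Both inputs are (T, D_t) and (T, D_s); feature dimensions may differ,
    since the similarity maps are both (T, T).
    """
    g_t = similarity_matrix(teacher_frames)
    g_s = similarity_matrix(student_frames)
    t = teacher_frames.shape[0]
    return float(np.sum((g_t - g_s) ** 2) / (t * t))
```

Note that because the loss compares T-by-T similarity maps rather than raw activations, the teacher and student feature dimensions need not match, which is what makes this practical for distilling into much smaller students.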


 DOI: 10.21437/Interspeech.2020-2835

Cite as: Chang, C., Kao, C., Sun, M., Wang, C. (2020) Intra-Utterance Similarity Preserving Knowledge Distillation for Audio Tagging. Proc. Interspeech 2020, 851-855, DOI: 10.21437/Interspeech.2020-2835.


@inproceedings{Chang2020,
  author={Chun-Chieh Chang and Chieh-Chi Kao and Ming Sun and Chao Wang},
  title={{Intra-Utterance Similarity Preserving Knowledge Distillation for Audio Tagging}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={851--855},
  doi={10.21437/Interspeech.2020-2835},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2835}
}