Environmental Sound Classification with Parallel Temporal-Spectral Attention

Helin Wang, Yuexian Zou, Dading Chong, Wenwu Wang


Convolutional neural networks (CNNs) are among the best-performing neural network architectures for environmental sound classification (ESC). Recently, temporal attention mechanisms have been used in CNNs to capture useful information from the relevant time frames for audio classification, especially for weakly labelled data where the onset and offset times of the sound events are not available. In these methods, however, the inherent spectral characteristics and variations are not explicitly exploited when obtaining the deep features. In this paper, we propose a novel parallel temporal-spectral attention mechanism for CNNs to learn discriminative sound representations, which enhances the temporal and spectral features by capturing the importance of different time frames and frequency bands. Parallel branches are constructed so that temporal attention and spectral attention can be applied separately, in order to mitigate interference from segments without sound events. Experiments on three ESC datasets and two acoustic scene classification (ASC) datasets show that our method improves classification performance and also exhibits robustness to noise.
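The abstract describes attention weights computed over time frames and over frequency bands in parallel branches, then used to enhance the feature map. A minimal conceptual sketch of that idea (not the authors' exact architecture; the pooling choices, scoring weights `w_t`/`w_f`, and summation of the branches here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def parallel_temporal_spectral_attention(x, w_t, w_f):
    """x: (F, T) spectrogram-like feature map (F frequency bands, T time frames).
    w_t, w_f: hypothetical per-frame / per-band scoring weights (learned in practice)."""
    # Temporal branch: pool over frequency, score each time frame,
    # and reweight frames so event-bearing segments dominate.
    a_t = softmax(x.mean(axis=0) * w_t)     # (T,) temporal attention weights
    x_t = x * a_t[np.newaxis, :]            # temporally attended features

    # Spectral branch: pool over time, score each frequency band.
    a_f = softmax(x.mean(axis=1) * w_f)     # (F,) spectral attention weights
    x_f = x * a_f[:, np.newaxis]            # spectrally attended features

    # Parallel combination: the two enhanced feature maps are merged
    # (summation here is an assumption for illustration).
    return x_t + x_f

F, T = 64, 100
x = rng.standard_normal((F, T))
y = parallel_temporal_spectral_attention(
    x, rng.standard_normal(T), rng.standard_normal(F))
print(y.shape)  # (64, 100)
```

Keeping the two branches separate, as the paper proposes, lets each attend to its own axis before the results are fused, rather than entangling time and frequency weighting in a single joint map.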


DOI: 10.21437/Interspeech.2020-1219

Cite as: Wang, H., Zou, Y., Chong, D., Wang, W. (2020) Environmental Sound Classification with Parallel Temporal-Spectral Attention. Proc. Interspeech 2020, 821-825, DOI: 10.21437/Interspeech.2020-1219.


@inproceedings{Wang2020,
  author={Helin Wang and Yuexian Zou and Dading Chong and Wenwu Wang},
  title={{Environmental Sound Classification with Parallel Temporal-Spectral Attention}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={821--825},
  doi={10.21437/Interspeech.2020-1219},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1219}
}