An Acoustic Segment Model Based Segment Unit Selection Approach to Acoustic Scene Classification with Partial Utterances

Hu Hu, Sabato Marco Siniscalchi, Yannan Wang, Xue Bai, Jun Du, Chin-Hui Lee


In this paper, we propose a sub-utterance unit selection framework to remove acoustic segments in audio recordings that carry little information for acoustic scene classification (ASC). Our approach is built upon a universal set of acoustic segment units covering the overall acoustic scene space. First, those units are modeled with acoustic segment models (ASMs), which are used to tokenize acoustic scene utterances into sequences of acoustic segment units. Next, paralleling the idea of stop words in information retrieval, stop ASMs are automatically detected. Finally, acoustic segments associated with the stop ASMs are blocked because of their low indexing power for retrieving most acoustic scenes. In contrast to building scene models with whole utterances, the ASM-removed sub-utterances, i.e., acoustic utterances without the stop acoustic segments, are then used as inputs to the AlexNet-L back-end for final classification. On the DCASE 2018 dataset, scene classification accuracy increases from 68% with whole utterances to 72.1% with segment selection. This is a competitive accuracy obtained without any data augmentation or ensemble strategy. Moreover, our approach compares favourably to AlexNet-L with attention.
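To make the stop-ASM idea concrete, the sketch below shows one way stop units might be detected from ASM tokenizations and how the corresponding segments could be dropped before classification. It is a minimal illustration only: the document-frequency criterion, the threshold, and all function names are assumptions made for this example, since the abstract does not specify the paper's actual detection rule or decoder interface.

# Hedged sketch: flag "stop" acoustic segment units that appear in nearly
# every utterance (low indexing power), then drop their segments, analogous
# to stop-word removal in information retrieval. The criterion and names
# are illustrative assumptions, not the paper's exact procedure.
from collections import Counter

def find_stop_asms(tokenized_utterances, df_threshold=0.9):
    # tokenized_utterances: list of ASM label sequences, one per utterance,
    # as produced by decoding with the acoustic segment models.
    n_utts = len(tokenized_utterances)
    doc_freq = Counter()
    for tokens in tokenized_utterances:
        doc_freq.update(set(tokens))  # count each label at most once per utterance
    # A unit occurring in more than df_threshold of all utterances is a stop ASM.
    return {asm for asm, df in doc_freq.items() if df / n_utts > df_threshold}

def remove_stop_segments(tokens, segment_times, stop_asms):
    # segment_times: (start, end) boundaries aligned with tokens; the retained
    # spans define the partial utterance passed to the classifier back-end.
    return [(lab, span) for lab, span in zip(tokens, segment_times)
            if lab not in stop_asms]

# Toy usage with arbitrary labels:
utts = [["a3", "a7", "a3", "a9"], ["a3", "a1", "a3"], ["a3", "a5"]]
stop = find_stop_asms(utts, df_threshold=0.8)   # -> {"a3"}
kept = remove_stop_segments(utts[0], [(0, 1), (1, 2), (2, 3), (3, 4)], stop)

In the actual system, the retained time spans would be used to select the corresponding portions of the spectrogram features before they are fed to the AlexNet-L back-end; that feature-selection step is omitted here.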


 DOI: 10.21437/Interspeech.2020-2044

Cite as: Hu, H., Siniscalchi, S.M., Wang, Y., Bai, X., Du, J., Lee, C.-H. (2020) An Acoustic Segment Model Based Segment Unit Selection Approach to Acoustic Scene Classification with Partial Utterances. Proc. Interspeech 2020, 1201-1205, DOI: 10.21437/Interspeech.2020-2044.


@inproceedings{Hu2020,
  author={Hu Hu and Sabato Marco Siniscalchi and Yannan Wang and Xue Bai and Jun Du and Chin-Hui Lee},
  title={{An Acoustic Segment Model Based Segment Unit Selection Approach to Acoustic Scene Classification with Partial Utterances}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1201--1205},
  doi={10.21437/Interspeech.2020-2044},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2044}
}