A Transformer-Based Audio Captioning Model with Keyword Estimation

Yuma Koizumi, Ryo Masumura, Kyosuke Nishida, Masahiro Yasuda, Shoichiro Saito

One of the problems with automated audio captioning (AAC) is the indeterminacy in word selection corresponding to the audio event/scene. Since one acoustic event/scene can be described with several words, this results in a combinatorial explosion of possible captions and difficulty in training. To solve this problem, we propose a Transformer-based audio-captioning model with keyword estimation called TRACKE. It simultaneously solves the word-selection-indeterminacy problem in the main task of AAC while executing the sub-task of acoustic event detection/acoustic scene classification (i.e., keyword estimation). TRACKE estimates keywords, which comprise a word set corresponding to audio events/scenes in the input audio, and generates the caption while referring to the estimated keywords to reduce word-selection indeterminacy. Experimental results on a public AAC dataset indicate that TRACKE achieved state-of-the-art performance and successfully estimated both the caption and its keywords.
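The abstract's architecture can be sketched as follows. This is a hypothetical, heavily simplified illustration (NumPy stand-ins for the Transformer blocks; all shapes, names, and the mean-pooling/thresholding choices are assumptions, not the authors' implementation): a shared audio encoder feeds both a keyword-estimation branch (multi-label prediction with a sigmoid over a keyword vocabulary) and a caption decoder that attends jointly to the audio features and the embeddings of the estimated keywords.

```python
# Hypothetical sketch of a TRACKE-style forward pass; NOT the paper's code.
# Transformer encoder/decoder blocks are replaced by random linear maps
# so the data flow (encode -> estimate keywords -> decode with keywords)
# is visible in a few lines.
import numpy as np

rng = np.random.default_rng(0)

T, F = 50, 16   # audio frames, input feature dim (assumed)
D = 8           # embedding dim (assumed)
K, V = 20, 100  # keyword vocab size, caption vocab size (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(audio):
    # Stand-in for the Transformer encoder: a linear projection.
    W = rng.standard_normal((audio.shape[1], D))
    return audio @ W  # (T, D) frame-level embeddings

def estimate_keywords(h, threshold=0.5):
    # Sub-task: multi-label keyword estimation from pooled audio features.
    pooled = h.mean(axis=0)                         # (D,)
    logits = pooled @ rng.standard_normal((D, K))   # (K,)
    return np.flatnonzero(sigmoid(logits) > threshold)  # estimated keyword ids

def decode_step(h, keyword_ids, keyword_emb):
    # Main task: one decoder step attending to audio frames AND the
    # embeddings of the estimated keywords (this is the "referring to
    # keywords" part that narrows word selection).
    memory = np.vstack([h, keyword_emb[keyword_ids]]) if len(keyword_ids) else h
    query = rng.standard_normal(D)
    attn = np.exp(memory @ query)
    attn /= attn.sum()                              # softmax attention weights
    context = attn @ memory                         # (D,) attended context
    logits = context @ rng.standard_normal((D, V))
    return int(np.argmax(logits))                   # next caption token id

audio = rng.standard_normal((T, F))
h = encode(audio)
kw = estimate_keywords(h)
tok = decode_step(h, kw, rng.standard_normal((K, D)))
```

In training, the keyword branch would get its own multi-label loss alongside the captioning loss, so the shared encoder is supervised by both tasks at once.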

DOI: 10.21437/Interspeech.2020-2087

Cite as: Koizumi, Y., Masumura, R., Nishida, K., Yasuda, M., Saito, S. (2020) A Transformer-Based Audio Captioning Model with Keyword Estimation. Proc. Interspeech 2020, 1977-1981, DOI: 10.21437/Interspeech.2020-2087.

@inproceedings{koizumi20_interspeech,
  author={Yuma Koizumi and Ryo Masumura and Kyosuke Nishida and Masahiro Yasuda and Shoichiro Saito},
  title={{A Transformer-Based Audio Captioning Model with Keyword Estimation}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={1977--1981},
  doi={10.21437/Interspeech.2020-2087}
}