Caption Alignment for Low Resource Audio-Visual Data

Vighnesh Reddy Konda, Mayur Warialani, Rakesh Prasanth Achari, Varad Bhatnagar, Jayaprakash Akula, Preethi Jyothi, Ganesh Ramakrishnan, Gholamreza Haffari, Pankaj Singh

Understanding videos via captioning has gained significant traction recently. While captions are provided alongside videos, information about where a caption aligns within a video is typically missing, even though such alignments could be particularly useful for indexing and retrieval. Existing work on learning to infer alignments has mostly exploited visual features and ignored the audio signal; video understanding applications often underestimate the importance of the audio modality. We focus on how to make effective use of the audio modality for the temporal localization of captions within videos. We release a new audio-visual dataset whose captions were time-aligned by (i) carefully listening to the audio while watching the video, and (ii) watching only the video. Our dataset is audio-rich and contains captions in two languages, English and Marathi (a low-resource language). We further propose an attention-driven multimodal model that effectively utilizes both audio and video for temporal localization. We then investigate (i) the effects of audio in both data preparation and model design, and (ii) effective pretraining strategies (AudioSet, ASR bottleneck features, PASE, etc.) for handling the low-resource setting and extracting rich audio representations.
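The core idea of attention-driven temporal localization can be illustrated with a minimal sketch: per-frame audio and video features are projected into a shared space, fused, and a caption embedding attends over the fused sequence to yield an alignment distribution over time. This is an illustrative sketch only; the projection matrices `W_a`, `W_v`, the additive fusion, and the function names are assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align_caption(caption_vec, audio_feats, video_feats, W_a, W_v):
    """Return a temporal alignment distribution for one caption.

    caption_vec : (d,)    caption embedding
    audio_feats : (T, d_a) per-frame audio features
    video_feats : (T, d_v) per-frame video features
    W_a, W_v    : hypothetical projections into the caption space
    """
    # Project both modalities into a shared space and fuse additively.
    fused = audio_feats @ W_a + video_feats @ W_v          # (T, d)
    # Scaled dot-product attention of the caption over the T frames.
    scores = fused @ caption_vec / np.sqrt(len(caption_vec))
    return softmax(scores)                                  # (T,) weights
```

The frames receiving the highest attention weights would then serve as the predicted temporal location of the caption.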

 DOI: 10.21437/Interspeech.2020-3157

Cite as: Konda, V.R., Warialani, M., Achari, R.P., Bhatnagar, V., Akula, J., Jyothi, P., Ramakrishnan, G., Haffari, G., Singh, P. (2020) Caption Alignment for Low Resource Audio-Visual Data. Proc. Interspeech 2020, 3525-3529, DOI: 10.21437/Interspeech.2020-3157.

@inproceedings{konda20_interspeech,
  author={Vighnesh Reddy Konda and Mayur Warialani and Rakesh Prasanth Achari and Varad Bhatnagar and Jayaprakash Akula and Preethi Jyothi and Ganesh Ramakrishnan and Gholamreza Haffari and Pankaj Singh},
  title={{Caption Alignment for Low Resource Audio-Visual Data}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={3525--3529},
  doi={10.21437/Interspeech.2020-3157}
}