TMT: A Transformer-Based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-Aware Dialog

Wubo Li, Dongwei Jiang, Wei Zou, Xiangang Li


Audio Visual Scene-aware Dialog (AVSD) is a task that requires generating responses in a dialog about a given video. The previous state-of-the-art model achieves strong performance on this task with a Transformer-based architecture, but it remains limited in how well it learns modality representations. Inspired by Neural Machine Translation (NMT), we propose the Transformer-based Modal Translator (TMT), which learns representations of a source modal sequence by translating it into a related target modal sequence in a supervised manner. Building on Multimodal Transformer Networks (MTN), we apply TMT to the video and dialog modalities, yielding MTN-TMT for the video-grounded dialog system. On the AVSD track of the Dialog System Technology Challenge 7, MTN-TMT outperforms MTN and the other submitted models on both the Video and Text task and the Text Only task. Compared with MTN, MTN-TMT improves all metrics, achieving a relative improvement of up to 14.1% on CIDEr.
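
The abstract's core idea, namely encoding a source modal sequence and training a Transformer to translate it into a related target modal sequence so that the encoder output becomes a better source representation, can be sketched roughly as below. This is a minimal illustration in PyTorch, not the authors' implementation: the class and variable names, layer sizes, feature dimensions, and the MSE objective are all our assumptions.

import torch
import torch.nn as nn

class TMT(nn.Module):
    # Minimal sketch: encode the source modal sequence, train a decoder to
    # reproduce the target modal sequence, and use the encoder output
    # ("memory") as the improved source representation.
    def __init__(self, d_src, d_tgt, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        self.src_proj = nn.Linear(d_src, d_model)  # project source features
        self.tgt_proj = nn.Linear(d_tgt, d_model)  # project target features
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, d_tgt)       # predict target features

    def forward(self, src, tgt):
        memory = self.transformer.encoder(self.src_proj(src))       # (B, T_src, d_model)
        dec = self.transformer.decoder(self.tgt_proj(tgt), memory)  # (B, T_tgt, d_model)
        return memory, self.out(dec)

# Hypothetical usage: translate video features to text embeddings and
# supervise with an L2 loss (our assumed objective, not the paper's).
model = TMT(d_src=2048, d_tgt=300)
video = torch.randn(4, 20, 2048)   # batch of video feature sequences
text = torch.randn(4, 15, 300)     # batch of target text embeddings
memory, pred = model(video, text)
loss = nn.functional.mse_loss(pred, text)

Under this reading, the translation loss is an auxiliary supervision signal: after training, the encoder output (memory) would be consumed by the downstream dialog model, here MTN, rather than the decoder's predictions.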


DOI: 10.21437/Interspeech.2020-2359

Cite as: Li, W., Jiang, D., Zou, W., Li, X. (2020) TMT: A Transformer-Based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-Aware Dialog. Proc. Interspeech 2020, 3501-3505, DOI: 10.21437/Interspeech.2020-2359.


@inproceedings{Li2020,
  author={Wubo Li and Dongwei Jiang and Wei Zou and Xiangang Li},
  title={{TMT: A Transformer-Based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-Aware Dialog}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3501--3505},
  doi={10.21437/Interspeech.2020-2359},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2359}
}