Multimodal Speech Emotion Recognition Using Cross Attention with Aligned Audio and Text

Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung


In this paper, we propose a novel speech emotion recognition model called the Cross Attention Network (CAN), which uses aligned audio and text signals as inputs. It is inspired by the fact that humans recognize speech as a combination of simultaneously produced acoustic and textual signals. First, our method segments the audio signal and the underlying text into an equal number of steps in an aligned way, so that the same time steps of the two sequential signals cover the same time span. We then apply cross attention to aggregate the sequential information from the aligned signals. In cross attention, each modality is first aggregated independently by applying a global attention mechanism to it. The attention weights of each modality are then applied directly to the other modality in a crossed way, so that the CAN gathers audio and text information from the same time steps based on each modality. In experiments on the standard IEMOCAP dataset, our model outperforms state-of-the-art systems by 2.66% and 3.18% relative in terms of weighted and unweighted accuracy, respectively.
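The crossed weight application described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature dimensions, the random features, and the scoring vectors `w_a` and `w_t` (which would be learned parameters in the actual model) are all hypothetical, and a simple dot-product global attention is assumed.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical toy setup: T aligned time steps, per-modality feature dims.
T, d_a, d_t = 5, 8, 8
rng = np.random.default_rng(0)
audio = rng.normal(size=(T, d_a))  # aligned audio features, one row per step
text = rng.normal(size=(T, d_t))   # aligned text features, same time steps

# Hypothetical scoring vectors (learned in the real model, random here).
w_a = rng.normal(size=d_a)
w_t = rng.normal(size=d_t)

# Global attention: each modality scores its own time steps independently.
alpha_a = softmax(audio @ w_a)  # audio-derived weights over the T steps
alpha_t = softmax(text @ w_t)   # text-derived weights over the T steps

# Crossed application: each modality's weights pool the *other* modality,
# which works because alignment makes step i mean the same span in both.
audio_by_text = alpha_t @ audio  # audio summary guided by text attention
text_by_audio = alpha_a @ text   # text summary guided by audio attention

# A fused utterance-level representation for the emotion classifier.
fused = np.concatenate([audio_by_text, text_by_audio])
```

The aligned segmentation is what makes the crossed step well defined: because step i of the audio and step i of the text cover the same time span, a weight computed from one modality is meaningful when applied to the other.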


DOI: 10.21437/Interspeech.2020-2312

Cite as: Lee, Y., Yoon, S., Jung, K. (2020) Multimodal Speech Emotion Recognition Using Cross Attention with Aligned Audio and Text. Proc. Interspeech 2020, 2717-2721, DOI: 10.21437/Interspeech.2020-2312.


@inproceedings{Lee2020,
  author={Yoonhyung Lee and Seunghyun Yoon and Kyomin Jung},
  title={{Multimodal Speech Emotion Recognition Using Cross Attention with Aligned Audio and Text}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2717--2721},
  doi={10.21437/Interspeech.2020-2312},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2312}
}