A Multi-Scale Fusion Framework for Bimodal Speech Emotion Recognition

Ming Chen, Xudong Zhao

Speech emotion recognition (SER) is a challenging task that requires learning suitable features to achieve good performance. The development of deep learning techniques makes it possible to extract features automatically rather than constructing hand-crafted ones. In this paper, a multi-scale fusion framework named STSER is proposed for bimodal SER using speech and text information. A smodel, which takes advantage of a convolutional neural network (CNN), a bi-directional long short-term memory (Bi-LSTM) network and the attention mechanism, is proposed to learn speech representations from the log-mel spectrogram extracted from speech data. Specifically, the CNN layers learn local correlations; the Bi-LSTM layer then learns long-term dependencies and contextual information; and a multi-head self-attention layer makes the model focus on the features most related to the emotions. A tmodel based on a pre-trained ALBERT model is applied to learn text representations from text data. Finally, a multi-scale fusion strategy, including feature fusion and ensemble learning, is applied to improve the overall performance. Experiments conducted on the public emotion dataset IEMOCAP show that the proposed STSER achieves comparable recognition accuracy with fewer feature inputs.
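The speech branch described above (CNN for local correlations, Bi-LSTM for temporal context, multi-head self-attention for emotion-relevant frames) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the class name `SModel`, all layer sizes, and the mean-pooling classification head are assumptions chosen for a minimal runnable example.

```python
import torch
import torch.nn as nn

class SModel(nn.Module):
    """Hypothetical sketch of the smodel speech branch:
    CNN -> Bi-LSTM -> multi-head self-attention -> classifier.
    Layer dimensions are illustrative, not the paper's configuration."""

    def __init__(self, n_mels=40, cnn_channels=32, lstm_hidden=64,
                 n_heads=4, n_classes=4):
        super().__init__()
        # CNN layers learn local time-frequency correlations in the log-mel spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # Bi-LSTM captures long-term dependencies and contextual information
        self.lstm = nn.LSTM(cnn_channels * (n_mels // 2), lstm_hidden,
                            batch_first=True, bidirectional=True)
        # multi-head self-attention focuses on emotion-relevant time steps
        self.attn = nn.MultiheadAttention(2 * lstm_hidden, n_heads,
                                          batch_first=True)
        self.fc = nn.Linear(2 * lstm_hidden, n_classes)

    def forward(self, spec):                      # spec: (batch, time, n_mels)
        x = self.cnn(spec.unsqueeze(1))           # (B, C, T/2, M/2)
        b, c, t, m = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * m)  # frames as a sequence
        x, _ = self.lstm(x)                       # (B, T/2, 2*hidden)
        x, _ = self.attn(x, x, x)                 # self-attention over frames
        return self.fc(x.mean(dim=1))             # pool over time, classify

model = SModel()
logits = model(torch.randn(2, 100, 40))  # batch of 2 spectrograms, 100 frames
print(logits.shape)  # torch.Size([2, 4])
```

In the full STSER framework, the pooled speech representation would be fused with the ALBERT-based text representation (feature fusion) and combined with per-branch predictions via ensemble learning; that fusion stage is not shown here.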

 DOI: 10.21437/Interspeech.2020-3156

Cite as: Chen, M., Zhao, X. (2020) A Multi-Scale Fusion Framework for Bimodal Speech Emotion Recognition. Proc. Interspeech 2020, 374-378, DOI: 10.21437/Interspeech.2020-3156.

@inproceedings{chen20_interspeech,
  author={Ming Chen and Xudong Zhao},
  title={{A Multi-Scale Fusion Framework for Bimodal Speech Emotion Recognition}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={374--378},
  doi={10.21437/Interspeech.2020-3156}
}