Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition

Md. Asif Jalal, Rosanna Milner, Thomas Hain


Speech emotion recognition is essential for obtaining emotional intelligence, which affects the understanding of the context and meaning of speech. Harmonically structured vowel and consonant sounds add indexical and linguistic cues to spoken information. Previous research has debated whether vowel sound cues are more important in carrying emotional context, from psychological and linguistic points of view. Other research has claimed that emotion information can exist in small, overlapping acoustic cues. However, these claims are not corroborated in computational speech emotion recognition systems. In this research, a convolution-based model and a long short-term memory (LSTM) based model, both using attention, are applied to investigate these theories of speech emotion in computational models. The role of acoustic context and word importance is demonstrated for the task of speech emotion recognition. The proposed models are evaluated on the IEMOCAP corpus, achieving 80.1% unweighted accuracy on purely acoustic data, which is higher than current state-of-the-art models on this task. The phones and words are mapped to the attention vectors, showing that vowel sounds are more important than consonants for defining emotional acoustic cues, and that the model can assign word importance based on acoustic context.
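The attention mechanism described above pools a sequence of frame-level acoustic features into a single utterance-level representation, and the resulting attention weights are what the authors map back onto phones and words. A minimal sketch of such attention pooling is shown below; the function name, dimensions, and single-vector attention parameter are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(frames, w):
    """Attention pooling over frame embeddings (illustrative sketch).

    frames: (T, D) array of T frame-level feature vectors.
    w:      (D,) attention parameter (hypothetical; real models learn it).
    Returns the (D,) pooled utterance vector and the (T,) attention weights.
    """
    scores = frames @ w                      # raw alignment score per frame
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    pooled = weights @ frames                # weighted sum of frames
    return pooled, weights

# Toy example: 5 frames of 4-dimensional features.
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 4))
w = rng.normal(size=4)
pooled, alpha = attention_pool(frames, w)
print(alpha)  # weights sum to 1; larger values mark more salient frames
```

Because the weights form a distribution over time, aligning them with a phone- or word-level segmentation (as the paper does) indicates which units the model attends to when predicting emotion.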


DOI: 10.21437/Interspeech.2020-3007

Cite as: Jalal, M.A., Milner, R., Hain, T. (2020) Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition. Proc. Interspeech 2020, 4113-4117, DOI: 10.21437/Interspeech.2020-3007.


@inproceedings{Jalal2020,
  author={Md. Asif Jalal and Rosanna Milner and Thomas Hain},
  title={{Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4113--4117},
  doi={10.21437/Interspeech.2020-3007},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3007}
}