EigenEmo: Spectral Utterance Representation Using Dynamic Mode Decomposition for Speech Emotion Classification

Shuiyang Mao, P.C. Ching, Tan Lee


Human emotional speech is, by its very nature, a time-varying signal, and these dynamics are intrinsic to automatic emotion classification from speech. In this work, we explore a spectral decomposition method originating in fluid dynamics, known as Dynamic Mode Decomposition (DMD), to computationally represent and analyze the global utterance-level dynamics of emotional speech. Specifically, segment-level emotion-specific representations are first learned through an Emotion Distillation process. This forms a multi-dimensional signal of emotion flow for each utterance, called an Emotion Profile (EP). The DMD algorithm is then applied to the resulting EPs to capture their eigenfrequencies, and hence the fundamental transition dynamics of the emotion flow. Evaluation experiments using the proposed approach, which we call EigenEmo, show promising results. Moreover, because the two representations have complementary properties, concatenating the utterance representations generated by EigenEmo with simple EP averaging yields noticeable gains.
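The central step described above, extracting eigenfrequencies from a multi-dimensional signal such as an EP, can be illustrated with standard exact DMD. The sketch below is not the authors' implementation; the function name `dmd` and the optional truncation rank `r` are illustrative choices, and the input is assumed to be a features-by-timesteps snapshot matrix.

```python
import numpy as np

def dmd(X, r=None):
    """Exact Dynamic Mode Decomposition of a multivariate time series.

    X : (n_features, n_timesteps) data matrix; columns are snapshots.
    r : optional SVD truncation rank (assumption; full rank if None).
    Returns the discrete-time DMD eigenvalues and the DMD modes.
    """
    # Time-shifted snapshot pairs: X2 ≈ A @ X1 for a linear operator A
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Low-rank projection of A onto the leading POD subspace
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(Atilde)
    # Exact DMD modes lifted back to the full state space
    modes = X2 @ Vh.conj().T / s @ W
    return eigvals, modes
```

The complex argument of each eigenvalue gives an oscillation frequency of the signal and its magnitude gives a growth/decay rate, which is how DMD exposes the transition dynamics of the emotion flow.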


DOI: 10.21437/Interspeech.2020-1762

Cite as: Mao, S., Ching, P., Lee, T. (2020) EigenEmo: Spectral Utterance Representation Using Dynamic Mode Decomposition for Speech Emotion Classification. Proc. Interspeech 2020, 2352-2356, DOI: 10.21437/Interspeech.2020-1762.


@inproceedings{Mao2020,
  author={Shuiyang Mao and P.C. Ching and Tan Lee},
  title={{EigenEmo: Spectral Utterance Representation Using Dynamic Mode Decomposition for Speech Emotion Classification}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2352--2356},
  doi={10.21437/Interspeech.2020-1762},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1762}
}