Integrating Recurrence Dynamics for Speech Emotion Recognition

Efthymios Tzinis, Georgios Paraskevopoulos, Christos Baziotis, Alexandros Potamianos

We investigate the performance of features that capture the nonlinear recurrence dynamics embedded in the speech signal for the task of Speech Emotion Recognition (SER). Reconstructing the phase space of each speech frame and computing its respective Recurrence Plot (RP) reveal complex structures which can be measured by performing Recurrence Quantification Analysis (RQA). These measures are aggregated using statistical functionals over segment and utterance periods. We report SER results for the proposed feature set on three databases using different classification methods. When fusing the proposed features with traditional feature sets, e.g., [1], we show an improvement in unweighted accuracy of up to 5.7% and 10.7% on Speaker-Dependent (SD) and Speaker-Independent (SI) SER tasks, respectively, over the baseline [1]. Following a segment-based approach, we demonstrate state-of-the-art performance on IEMOCAP using a Bidirectional Recurrent Neural Network.
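To make the pipeline in the abstract concrete, the following is a minimal sketch of the first two steps described above: time-delay phase-space reconstruction of a frame and the thresholded distance matrix that forms its Recurrence Plot, together with the simplest RQA measure (recurrence rate). The embedding dimension `m`, delay `tau`, and threshold `eps` are illustrative placeholders, not the values used in the paper, and the sinusoid merely stands in for a real speech frame.

```python
import numpy as np

def phase_space_embed(x, m=3, tau=2):
    # Time-delay embedding: each row is one point in the
    # reconstructed m-dimensional phase space.
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(m)], axis=1)

def recurrence_plot(x, m=3, tau=2, eps=0.3):
    # Binary recurrence matrix: R[i, j] = 1 when embedded points
    # i and j lie within Euclidean distance eps of each other.
    X = phase_space_embed(x, m, tau)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def recurrence_rate(R):
    # Simplest RQA measure: fraction of recurrent points in the plot.
    return R.mean()

# Toy "frame": a short sinusoid standing in for a windowed speech frame.
frame = np.sin(np.linspace(0, 4 * np.pi, 100))
R = recurrence_plot(frame, m=3, tau=2, eps=0.3)
print(R.shape)  # (96, 96): 100 samples minus (m - 1) * tau embedded points
print(recurrence_rate(R))
```

In the paper's full pipeline, RQA measures such as this one are computed per frame and then summarized with statistical functionals over segments and utterances before classification.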

DOI: 10.21437/Interspeech.2018-1377

Cite as: Tzinis, E., Paraskevopoulos, G., Baziotis, C., Potamianos, A. (2018) Integrating Recurrence Dynamics for Speech Emotion Recognition. Proc. Interspeech 2018, 927-931, DOI: 10.21437/Interspeech.2018-1377.

@inproceedings{tzinis18_interspeech,
  author={Efthymios Tzinis and Georgios Paraskevopoulos and Christos Baziotis and Alexandros Potamianos},
  title={Integrating Recurrence Dynamics for Speech Emotion Recognition},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={927--931},
  doi={10.21437/Interspeech.2018-1377}
}