Third International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA 2003)

Florence, Italy
December 10-12, 2003

Automatic Analysis of Vocal Manifestations of Apparent Mood or Affect

E. P. Rosenfeld (1), Dominic W. Massaro (2), J. Bernstein (1)

(1) Ordinate Corporation, Menlo Park, California, USA
(2) Department of Psychology, University of California at Santa Cruz, USA

Skilled clinicians integrate linguistic, paralinguistic, and non-linguistic cues when assessing mood disorders. This project identified duration- and amplitude-based aspects of the speech signal that can be measured automatically by computer and that provide paralinguistic information about the apparent affect of a speech sample. A group of 40 experimental subjects produced 1584 spoken renditions of sentences under three conditions: uninstructed, depressive, and manic. An automatic speech recognition system extracted 10 paralinguistic parameter values from each spoken response. Psychotherapists hold a relatively uniform model of depressive and manic speech patterns, which is reflected in distinct paralinguistic features of their speech when they simulate these states. Several features differ significantly across the three simulated emotional states, and these differences can be detected automatically.
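The abstract does not list the 10 parameters, but duration- and amplitude-based measures of the kind it describes can be sketched with frame-level energy analysis. The following is a minimal illustration, not the authors' method: the feature names (`pause_fraction`, `mean_level_db`, `level_range_db`) and the energy-threshold pause detector are assumptions chosen for the example.

```python
import numpy as np

def paralinguistic_features(signal, sr, frame_ms=25, hop_ms=10, silence_db=-40.0):
    """Illustrative duration- and amplitude-based features from a mono signal.

    A crude energy threshold (silence_db, relative to the loudest frame)
    stands in for a real speech/pause detector.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = max(1, 1 + (len(signal) - frame) // hop)
    # Frame-level RMS energy
    rms = np.array([
        np.sqrt(np.mean(signal[i * hop : i * hop + frame] ** 2))
        for i in range(n_frames)
    ])
    peak = rms.max() if rms.max() > 0 else 1.0
    db = 20 * np.log10(np.maximum(rms / peak, 1e-10))  # dB relative to peak frame
    voiced = db > silence_db
    return {
        "total_duration_s": len(signal) / sr,
        "pause_fraction": float(np.count_nonzero(~voiced)) / len(voiced),
        "mean_level_db": float(db[voiced].mean()) if voiced.any() else float("-inf"),
        "level_range_db": float(db[voiced].max() - db[voiced].min()) if voiced.any() else 0.0,
    }
```

On a half-second tone followed by half a second of silence, roughly half the frames are classified as pause, and the amplitude statistics are computed over the voiced frames only.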

Index Terms. automatic, speech recognition, mood, affect.

Full Paper (reprinted with permission from Firenze University Press)

Bibliographic reference. Rosenfeld, E. P. / Massaro, Dominic W. / Bernstein, J. (2003): "Automatic analysis of vocal manifestations of apparent mood or affect", In MAVEBA-2003, 5-8.