Speech Prosody 2010
Chicago, IL, USA
This paper is part of a larger study examining the cross-linguistic perception of sad and happy speech when the information is transmitted semantically (linguistic) or prosodically (affective). Here we examine American English and Japanese listeners' ability to perceive emotions in Japanese utterances. Native listeners were expected to be better than non-natives at perceiving emotion expressed semantically, because they have access to the semantic information. However, Japanese listeners, like American English listeners, were not successful in discriminating emotion from the semantic content of the utterances. Both native and non-native listeners could perceive that a speaker was sad or happy through the affective prosody. These results indicate that sad and happy are expressed universally in the same way, even in the auditory modality. Acoustic analysis showed differences in intensity, mora duration, and F0 range across the linguistic, affective, and neutral utterances, and across the sad, happy, and neutral emotions. The linguistic utterances revealed acoustic differences between the three emotional states in addition to differences in semantic content.

Index Terms: perception of emotion, affective, semantic, cross-linguistic, sad and happy
Bibliographic reference. Menezes, Caroline / Erickson, Donna / Franks, Clayton (2010): "Comparison between linguistic and affective perception of sad and happy: a cross-linguistic study", In SP-2010, paper 220.