Speech Prosody 2012
This paper presents a cross-cultural perception study of emotion in English utterances by American, Japanese, and Korean listeners. The perception of sad and happy speech conveyed through the linguistic modality (semantics) and the affective modality (prosody) is tested to understand how native and non-native listeners comprehend a speaker's emotion. Native subjects are expected to outperform non-natives at perceiving emotion expressed in both modalities because of their competence in accessing semantic information as well as emotional prosodic information. Results reveal that, in general, Americans perceive emotion in English better than Japanese and Korean listeners. However, native listeners and non-native Japanese listeners are more successful at discriminating emotion in affective and neutral utterances, whereas Korean listeners are better at perceiving emotion in linguistic utterances; this may reflect how English is taught as a second language in many countries. Our findings also indicate that listeners' choice of modality processing depends on emotion type: happy utterances are better perceived in the affective modality, while sad utterances are better perceived in the linguistic modality. Results further show that females are better at judging emotion from affective prosody, while males rely more on the semantic coding of emotion. Happy utterances are better perceived by males, and sad utterances by females. Females are in general better at perceiving emotion than males across all language groups.
Index Terms: perception of emotion, affective, semantic, cross-cultural, sad and happy, gender prototypes
Bibliographic reference. Menezes, Caroline / Erickson, Donna / Han, Jonghye (2012): "Cross-linguistic cross-modality perception of English sad and happy speech". In: Proceedings of Speech Prosody 2012 (SP-2012), 649-652.