Multimodal emotional speech perception

Sonja Kotz


Social interactions rely on multiple verbal and non-verbal information sources and their interplay. Crucially, in such communicative interactions we can obtain information not only about the current emotional state of others (‘what’) but also about the timing of these information sources (‘when’). However, the perception and integration of multiple emotion expressions is susceptible to environmental noise and may be influenced by a specific situational context or learned knowledge. In our work on the temporal and neural correlates of multimodal emotion expressions, we address a number of questions by means of ERPs and fMRI within a predictive coding framework. In my talk I will focus on the following questions: (1) How do we integrate verbal and non-verbal emotion expressions? (2) How does noise affect the integration of multiple emotion expressions? (3) How do cognitive demands impact the processing of multimodal emotion expressions? (4) How do we resolve interference between verbal and non-verbal emotion expressions?


Cite as: Kotz, S. (2018) Multimodal emotional speech perception. Proc. 9th International Conference on Speech Prosody 2018.


@inproceedings{Kotz2018,
  author={Kotz, Sonja},
  title={Multimodal emotional speech perception},
  year={2018},
  booktitle={Proc. 9th International Conference on Speech Prosody 2018}
}