Cross-cultural (A)symmetries in Audio-visual Attitude Perception

Hansjörg Mixdorff, Albert Rilliard, Tan Lee, Matthew K. H. Ma, Angelika Hönemann

This paper evaluates results from a cross-cultural and cross-language series of experiments employing short audio-visual utterances produced with varying attitudinal expressions. German- and Cantonese-speaking participants freely labeled such utterances in the two languages, assigning a verbal label to each stimulus. Based on the results of the four experiments we were able to establish to what degree the attitudinal frames of reference of the two groups overlap and how they differ. Verbal labels were assessed regarding their emotional content in terms of valence, activation, and dominance, as well as for the linguistic opposition between assertive and interrogative speech acts. This allows us to abstract from the language of the rater and ultimately even from the attitudinal categories used when eliciting the stimuli; instead, we regard each utterance as a data point in the emotional space. We found that the judgments of the two rater groups agree well with respect to the valence of attitudinal expressions and diverge most with respect to the perceived activation of the stimulus presenter. Cantonese-speaking participants seem to mirror the Germans' ratings of German stimuli better than vice versa, which suggests an interesting asymmetry of attitudinal perception. As for the modality of presentation, the audio channel primarily transmits linguistically relevant information regarding the opposition of assertion and interrogation, while the visual channel signals the emotional content.

DOI: 10.21437/Interspeech.2018-1373

Cite as: Mixdorff, H., Rilliard, A., Lee, T., Ma, M.K.H., Hönemann, A. (2018) Cross-cultural (A)symmetries in Audio-visual Attitude Perception. Proc. Interspeech 2018, 426-430, DOI: 10.21437/Interspeech.2018-1373.

@inproceedings{mixdorff_interspeech2018,
  author={Hansjörg Mixdorff and Albert Rilliard and Tan Lee and Matthew K. H. Ma and Angelika Hönemann},
  title={Cross-cultural (A)symmetries in Audio-visual Attitude Perception},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={426--430},
  doi={10.21437/Interspeech.2018-1373}
}