Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization?

Gilbert Ambrazaitis, David House


The interplay of verbal and visual prominence cues has attracted recent attention, but previous findings are inconclusive as to whether and how the two modalities are integrated in the production and perception of prominence. In particular, we do not know whether the phonetic realization of pitch accents is influenced by co-speech beat gestures, and previous findings seem to generate different predictions. In this study, we investigate acoustic properties of prominent words as a function of visual beat gestures in a corpus of read news from Swedish television. The corpus was annotated for head and eyebrow beats as well as sentence-level pitch accents. Four types of prominence cues occurred particularly frequently in the corpus: (1) pitch accent only, (2) pitch accent plus head, (3) pitch accent plus head plus eyebrows, and (4) head only. The results show that (4) differs from (1-3) in terms of a smaller pitch excursion and shorter syllable duration. They also reveal significantly larger pitch excursions in (2) than in (1), suggesting that the realization of a pitch accent is to some extent influenced by the presence of visual prominence cues. Results are discussed in terms of the interaction between beat gestures and prosody, with a potential functional difference between head and eyebrow beats.


DOI: 10.21437/AVSP.2017-17

Cite as: Ambrazaitis, G., House, D. (2017) Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization?. Proc. The 14th International Conference on Auditory-Visual Speech Processing, 89-94, DOI: 10.21437/AVSP.2017-17.


@inproceedings{Ambrazaitis2017,
  author={Gilbert Ambrazaitis and David House},
  title={Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization?},
  year={2017},
  booktitle={Proc. The 14th International Conference on Auditory-Visual Speech Processing},
  pages={89--94},
  doi={10.21437/AVSP.2017-17},
  url={http://dx.doi.org/10.21437/AVSP.2017-17}
}