Auditory-Visual Speech Processing
A single-case study was carried out on a patient (KB) who presented with "aprosodia" following a right hemisphere stroke, to explore the cross-modal integration of auditory and visual cues in prosodic speech perception. KB was tested on two prosodic speech perception tasks: sentence intonation categorization (i.e., statement vs. question) and emphatic stress categorization (i.e., whether the first or second noun was stressed). In addition, he was tested on two segmental speech perception tasks: a McGurk task and speech-in-noise perception. All tasks were administered in three presentation conditions: audio-only, visual-only, and audiovisual. KB performed at about chance on both prosody perception tasks in all three presentation conditions. In contrast, he performed near ceiling in the visual-only and audiovisual conditions on both segmental speech perception tasks. His performance on the speech-in-noise task showed that he was able to use visual information to compensate for impoverished auditory information in segmental speech perception, and his results on the McGurk task likewise indicated cross-modal integration in segmental speech perception. Together, these results suggest that, although KB's ability to process visual information in segmental speech tasks is intact, he is nonetheless unable to process prosodic speech information in either the auditory or visual modality.
Bibliographic reference. Nicholson, Karen / Baum, Shari / Cuddy, Lola / Munhall, Kevin (2001): "A case of multimodal aprosodia: impaired auditory and visual speech prosody perception in a patient with right hemisphere damage", in: AVSP-2001, 62-65.