Auditory-Visual Speech Processing 2007 (AVSP2007)
Kasteel Groenendaal, Hilvarenbeek, The Netherlands
Recent research has shown that concurrent visual speech modulates the cortical event-related potentials N1 and P2 to auditory speech. Audiovisually presented speech elicits an N1/P2 with reduced peak amplitudes and shorter peak latencies than unimodal auditory speech. This effect on the N1/P2 is consistent with a model in which visual speech integrates with auditory speech at an early processing stage, suppressing activity in the auditory cortex. We examined the effects of audiovisual temporal synchrony on these N1/P2 modulations. When the visual stream was presented in synchrony with the auditory stream, our results replicated the basic finding of reduced N1/P2 peak amplitudes relative to a unimodal auditory condition. When the visual stream was temporally mismatched with the auditory stream (the auditory speech signal was presented 200 ms before its recorded position), the recorded N1/P2 was similar to that for unimodal auditory speech. The results are discussed in terms of van Wassenhove's 'analysis-by-synthesis' model of audiovisual integration.
Bibliographic reference. Pilling, Michael / Thomas, Sharon (2007): "Temporal factors in the electrophysiological markers of audiovisual speech integration", In AVSP-2007, paper P10.