Auditory-Visual Speech Processing (AVSP) 2013
In a series of experiments we showed that the McGurk effect may be modulated by context: presenting incoherent auditory and visual material before an audiovisual target made of an audio "ba" and a video "ga" significantly decreases the McGurk effect. We interpreted this as evidence for an audiovisual binding stage controlling the fusion process: incoherence would produce unbinding and thus decrease the weight of the visual input in fusion. In this study, we further explore this binding stage through two experiments. First, we test the rebinding process by presenting a short period of either coherent material or silence after the incoherent unbinding context. We show that coherence provides rebinding, resulting in a recovery of the McGurk effect. In contrast, silence provides no rebinding and hence freezes the unbinding process, resulting in no recovery of the McGurk effect. Capitalizing on this result, in a second experiment comprising an incoherent unbinding context followed by a coherent rebinding context before the target, we add noise over the whole contextual period, though not over the McGurk target. It appears that noise uniformly increases the rate of McGurk responses compared with the silent condition. This suggests that contextual noise increases the weight of the visual input in fusion, even when there is no noise within the target stimulus where fusion is applied. We conclude by discussing the role of audiovisual coherence and noise in the binding process, within the framework of audiovisual speech scene analysis and the cocktail party effect.
Index Terms: audiovisual speech perception, McGurk effect, unbinding, rebinding, perception in noise
Bibliographic reference. Nahorna, Olha / Chandrashekara, Ganesh Attigodu / Berthommier, Frédéric / Schwartz, Jean-Luc (2013): "Modulating fusion in the McGurk effect by binding processes and contextual noise", In AVSP-2013, 181-186.