Auditory-Visual Speech Processing (AVSP) 2010
Hakone, Kanagawa, Japan
Previous studies on tongue reading, i.e., speech perception of degraded audio supported by animations of tongue movements, have indicated that the support is initially weak and that subjects need training to learn to interpret the movements. This paper investigates whether subjects learn the animation templates as such or instead learn to retrieve articulatory knowledge that they already possess. Matching and conflicting animations of tongue movements were presented randomly together with the auditory speech signal at three different noise levels in a consonant identification test. The average recognition rate over the three noise levels was significantly higher for the matched audiovisual condition than for the conflicting and auditory-only conditions. Audiovisual integration effects were also found for conflicting stimuli. However, the visual modality is given much less weight in perception than for a normal face view, and intersubject differences in the use of visual information are large.
Index Terms: McGurk, audiovisual speech perception, augmented reality
Bibliographic reference. Engwall, Olov (2010): "Is there a McGurk effect for tongue reading?", In AVSP-2010, paper S2-2.