Auditory speech processing is facilitated when the talker's face and head movements (visual speech) can be seen. This effect occurs across a range of spoken word tasks, e.g., spoken word identification (determining which word was presented) and speech detection (determining whether speech was presented). This study examined the effect of providing two types of visual cue on the speed of determining whether a speech or non-speech sound was presented. The speech stimuli were spoken nonwords; the non-speech stimuli were spectrally inverted versions of these. Each stimulus was paired with either the talker's static or moving face. Two types of moving-face stimuli were used: full-face versions, in which both spoken-form and timing cues were available, and modified versions, in which only the timing cues provided by peri-oral motion were available (i.e., the mouth area was obscured). The results showed that the peri-oral timing cues speeded response times for both the speech and non-speech stimuli (compared with the static face condition). An additional facilitatory effect was found for the full-face versions (compared with the peri-oral timing cue condition), but this effect occurred only for the speech stimuli. The different roles these cues play in speech processing are discussed.
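Spectral inversion preserves a signal's amplitude envelope and timing while destroying its phonetic content, which is why inverted speech serves as a matched non-speech control. As an illustration only (the paper does not describe its stimulus-generation procedure), here is a minimal NumPy sketch of one common way to invert a band-limited signal: ring-modulate with a carrier at the band edge, then low-pass filter, so a component at f Hz moves to band_hz - f Hz. The function name, the 4 kHz band edge, and the brick-wall FFT filter are all assumptions for this sketch.

```python
import numpy as np

def spectrally_invert(x, fs, band_hz=4000.0):
    """Illustrative spectral inversion: ring-modulate x with a carrier at
    band_hz, then low-pass at band_hz, so each component at f Hz
    (f < band_hz) is mirrored to band_hz - f Hz."""
    n = np.arange(len(x))
    carrier = np.sin(2 * np.pi * band_hz * n / fs)
    y = x * carrier  # components appear at band_hz - f and band_hz + f
    # crude brick-wall low-pass via the FFT: zero all bins above band_hz
    spec = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    spec[freqs > band_hz] = 0.0
    return np.fft.irfft(spec, n=len(y))

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)             # a 500 Hz test tone
inv = spectrally_invert(tone, fs)
peak_hz = np.argmax(np.abs(np.fft.rfft(inv)))  # bin index == Hz for a 1 s signal
print(peak_hz)                                 # 3500: the 500 Hz component moved to 4000 - 500 Hz
```

Running this on a tone makes the mirroring concrete: energy at 500 Hz ends up at 3500 Hz, so the overall temporal envelope is unchanged while the spectral shape is flipped.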
Bibliographic reference. Davis, Chris / Kim, Jeesun (2013): "The effect of visual speech timing and form cues on the processing of speech and nonspeech", In INTERSPEECH-2013, 1639-1642.