INTERSPEECH 2006 - ICSLP
The purpose of this study was to examine typically developing infants’ integration of audio-visual sensory information as a fundamental process involved in early word learning. One hundred sixty pre-linguistic children were randomly assigned to watch one of four counterbalanced versions of audio-visual video sequences. The infants’ eye movements were recorded and their looking behavior was analyzed throughout three repetitions of exposure–test phases. The results indicate that the infants were able to learn the covariance between the shapes and colors of arbitrary geometrical objects and the nonsense words corresponding to them. Implications of audio-visual integration in infants and in non-human animals for computational modeling in speech recognition systems, neural networks, and robotics are discussed.
Bibliographic reference. Klintfors, Eeva / Lacerda, Francisco (2006): "Potential relevance of audio-visual integration in mammals for computational modeling", in INTERSPEECH-2006, paper 1992-Tue3CaP.13.