Do Listeners Learn Better from Natural Speech?

Michael McAuliffe, Molly Babel, Charlotte Vaughn

Perceptual learning of novel pronunciations is a seemingly robust and efficient process for adapting to unfamiliar speech patterns. In this study, we compare perceptual learning of /s/ words in which a medially occurring /s/ is substituted with /ʃ/, rendering, for example, castle as /kæʃl/ instead of /kæsl/. Exposure to the novel pronunciations is presented in the guise of a lexical decision task. Perceptual learning is then assessed in a categorization task in which listeners are presented with minimal pair continua (e.g., sock-shock). Given recent suggestions that perceptual learning may be more robust with natural as opposed to synthesized speech, we compare perceptual learning in groups that receive either natural /s/-to-/ʃ/ words or resynthesized /s/-to-/ʃ/ words. Despite low word endorsement rates in the lexical decision task, both groups of listeners show robust generalization of perceptual learning to the novel minimal pair continua, indicating that, at least with high-quality resynthesis, perceptual learning from natural and synthesized speech is roughly equivalent.

DOI: 10.21437/Interspeech.2016-610

Cite as

McAuliffe, M., Babel, M., Vaughn, C. (2016) Do Listeners Learn Better from Natural Speech? Proc. Interspeech 2016, 2806-2810.

@inproceedings{mcauliffe2016interspeech,
  author={Michael McAuliffe and Molly Babel and Charlotte Vaughn},
  title={{Do Listeners Learn Better from Natural Speech?}},
  booktitle={Interspeech 2016},
  year={2016},
  pages={2806--2810},
  doi={10.21437/Interspeech.2016-610}
}