Visualizing Phoneme Category Adaptation in Deep Neural Networks

Odette Scharenborg, Sebastian Tiesmeyer, Mark Hasegawa-Johnson, Najim Dehak

Both human listeners and machines need to adapt their sound categories whenever a new speaker is encountered. This perceptual learning is driven by lexical information. The aim of this paper is two-fold: first, to investigate whether a deep neural network (DNN)-based ASR system can adapt to only a few examples of ambiguous speech, as humans have been found to do; second, to investigate a DNN's ability to serve as a model of human perceptual learning. Crucially, we do so by looking at intermediate levels of phoneme category adaptation rather than at the output level: we visualize the activations in the hidden layers of the DNN during perceptual learning. The results show that, similar to humans, DNN systems learn speaker-adapted phone category boundaries from a few labeled examples. The DNN adapts its category boundaries not only by adapting the weights of the output layer, but also by adapting the implicit feature maps computed by the hidden layers, suggesting that human perceptual learning might likewise involve a nonlinear distortion of a perceptual space that is intermediate between the acoustic input and the phonological categories. Comparisons between DNNs and humans can thus provide valuable insights into the way humans process speech and improve ASR technology.
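The adaptation effect described above can be illustrated with a minimal sketch. The following toy example is not the authors' system: it uses a hypothetical one-dimensional "acoustic cue" separating two phoneme categories, a tiny one-hidden-layer classifier, and a handful of lexically labeled ambiguous tokens. After adaptation, the hidden-layer activations for the same ambiguous input shift, showing that the category boundary moves inside the network and not only at the output layer.

```python
import numpy as np

# Hypothetical 1-D acoustic cue separating two phoneme categories;
# all names, sizes, and values here are illustrative, not from the paper.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny classifier: cue -> 4 hidden units -> P(category B)
W1, b1 = rng.normal(size=(4, 1)), np.zeros(4)
w2, b2 = rng.normal(size=4), 0.0

def forward(x):
    h = sigmoid(W1 @ np.atleast_1d(x) + b1)   # hidden-layer activations
    return h, sigmoid(w2 @ h + b2)

def train(X, y, lr=0.5, epochs=200):
    """Plain SGD on cross-entropy loss, updating ALL layers."""
    global W1, b1, w2, b2
    for _ in range(epochs):
        for x, t in zip(X, y):
            h, p = forward(x)
            d2 = p - t                        # output-layer error
            d1 = d2 * w2 * h * (1 - h)        # backpropagated hidden error
            W1 -= lr * np.outer(d1, np.atleast_1d(x))
            b1 -= lr * d1
            w2 -= lr * d2 * h
            b2 -= lr * d2

# Pre-train on clear exemplars: category A near -1, category B near +1
X = np.concatenate([rng.normal(-1, 0.3, 50), rng.normal(1, 0.3, 50)])
y = np.array([0] * 50 + [1] * 50)
train(X, y)

h_before, _ = forward(0.0)            # hidden code for an ambiguous sound

# "Perceptual learning": a few ambiguous tokens, lexically labeled as B
X_adapt = rng.normal(0.0, 0.1, 5)
train(X_adapt, np.ones(5, dtype=int), epochs=50)

h_after, p_after = forward(0.0)
print("hidden shift:", np.abs(h_after - h_before).max())
print("P(B | ambiguous):", p_after)
```

In this sketch the hidden activations for the unchanged ambiguous input move along with the decision, mirroring the paper's observation that adaptation distorts the intermediate feature maps rather than only re-weighting the output layer.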

DOI: 10.21437/Interspeech.2018-1707

Cite as: Scharenborg, O., Tiesmeyer, S., Hasegawa-Johnson, M., Dehak, N. (2018) Visualizing Phoneme Category Adaptation in Deep Neural Networks. Proc. Interspeech 2018, 1482-1486, DOI: 10.21437/Interspeech.2018-1707.

@inproceedings{scharenborg18_interspeech,
  author={Odette Scharenborg and Sebastian Tiesmeyer and Mark Hasegawa-Johnson and Najim Dehak},
  title={Visualizing Phoneme Category Adaptation in Deep Neural Networks},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={1482--1486},
  doi={10.21437/Interspeech.2018-1707}
}