Perceptimatic: A Human Speech Perception Benchmark for Unsupervised Subword Modelling

Juliette Millet, Ewan Dunbar


In this paper, we present a data set and methods to compare speech processing models and human behaviour on a phone discrimination task. We provide Perceptimatic, an open data set consisting of French and English speech stimuli, along with the responses of 91 English-speaking and 93 French-speaking listeners. The stimuli test a wide range of French and English contrasts and are extracted directly from corpora of natural, running read speech used in the 2017 Zero Resource Speech Challenge. We provide a method to compare humans’ perceptual space with models’ representational space, and we apply it to models previously submitted to the Challenge. We show that, unlike unsupervised models and supervised multilingual models, a standard supervised monolingual HMM-GMM phone recognition system, while good at discriminating phones, yields a representational space very different from that of human native listeners.


DOI: 10.21437/Interspeech.2020-1671

Cite as: Millet, J., Dunbar, E. (2020) Perceptimatic: A Human Speech Perception Benchmark for Unsupervised Subword Modelling. Proc. Interspeech 2020, 4881-4885, DOI: 10.21437/Interspeech.2020-1671.


@inproceedings{Millet2020,
  author={Juliette Millet and Ewan Dunbar},
  title={{Perceptimatic: A Human Speech Perception Benchmark for Unsupervised Subword Modelling}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4881--4885},
  doi={10.21437/Interspeech.2020-1671},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1671}
}