Independent and Automatic Evaluation of Speaker-Independent Acoustic-to-Articulatory Reconstruction

Maud Parrot, Juliette Millet, Ewan Dunbar


Reconstruction of articulatory trajectories from the acoustic speech signal has been proposed as a way to improve speech recognition and text-to-speech synthesis. However, to be useful in these settings, articulatory reconstruction must be speaker-independent. Furthermore, as most research focuses on single, small data sets with few speakers, robust articulatory reconstruction could profit from combining data sets. Standard evaluation measures such as root mean squared error and Pearson correlation are inappropriate for evaluating either the speaker independence of models or the usefulness of combining data sets. We present a new evaluation for articulatory reconstruction that is independent of the articulatory data set used for training: the phone discrimination ABX task. We use the ABX measure to evaluate a bi-LSTM-based model trained on three data sets (14 speakers), and show that it gives information complementary to standard measures, enabling us to evaluate both the effects of data set merging and the speaker independence of the model.
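The core of an ABX phone discrimination test is scoring triples: given representations A and X of the same phone category and B of a different one, a triple counts as correct when X is closer to A than to B, and the final score is the accuracy over many triples. The sketch below illustrates this scoring rule under simplifying assumptions not taken from the paper: fixed-length vector representations and cosine distance, whereas full ABX pipelines typically compare variable-length frame sequences with dynamic time warping. The function name `abx_correct` is ours, for illustration only.

```python
import numpy as np

def abx_correct(a, b, x):
    """Score one ABX triple: return True if x is closer to a (same phone
    category as x) than to b (different category), under cosine distance.
    Fixed-length vectors are a simplification; real ABX evaluations
    usually use DTW-based distances over frame sequences."""
    def cos_dist(u, v):
        # Cosine distance: 1 minus cosine similarity.
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return cos_dist(a, x) < cos_dist(b, x)

# Toy usage: x is a slightly perturbed copy of a, so the triple
# should be scored correct; the ABX score is the mean over many triples.
rng = np.random.default_rng(0)
a = rng.normal(size=8)
b = rng.normal(size=8)
x = a + 0.1 * rng.normal(size=8)
print(abx_correct(a, b, x))
```

Because the score depends only on the representations and the phone labels, it can be computed for any model's output without reference articulatory data, which is what makes the measure independent of the training corpus.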


DOI: 10.21437/Interspeech.2020-1746

Cite as: Parrot, M., Millet, J., Dunbar, E. (2020) Independent and Automatic Evaluation of Speaker-Independent Acoustic-to-Articulatory Reconstruction. Proc. Interspeech 2020, 3740-3744, DOI: 10.21437/Interspeech.2020-1746.


@inproceedings{Parrot2020,
  author={Maud Parrot and Juliette Millet and Ewan Dunbar},
  title={{Independent and Automatic Evaluation of Speaker-Independent Acoustic-to-Articulatory Reconstruction}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3740--3744},
  doi={10.21437/Interspeech.2020-1746},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1746}
}