Deep MOS Predictor for Synthetic Speech Using Cluster-Based Modeling

Yeunju Choi, Youngmoon Jung, Hoirin Kim


While deep learning has made impressive progress in speech synthesis and voice conversion, the assessment of synthesized speech is still carried out by human participants. Several recent papers have proposed deep-learning-based assessment models and shown the potential to automate speech quality assessment. To improve the previously proposed assessment model MOSNet, we propose three models that use cluster-based modeling: one with a global quality token (GQT) layer, one with an Encoding Layer, and one with both. We perform experiments on the evaluation results of the Voice Conversion Challenge 2018 to predict the mean opinion score (MOS) of synthesized speech and the similarity score between synthesized and reference speech. The results show that the GQT layer helps predict human assessment better by automatically learning quality tokens useful for the task, and that the Encoding Layer helps utilize frame-level scores more precisely.
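To make the two cluster-based components concrete, here is a minimal NumPy sketch of how such layers are typically formulated: a GQT-style layer in which frame-level features attend over a small set of learned quality tokens (in the spirit of global style tokens), and an Encoding-Layer-style aggregator that soft-assigns frame residuals to learned codewords. All shapes, the single-head attention, and the function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gqt_layer(frames, tokens, W_q, W_k, W_v):
    """GQT-style attention: each frame attends over learned quality tokens.

    frames: (T, d)   frame-level features
    tokens: (K, d_t) learned quality-token embeddings
    Returns a per-frame quality embedding of shape (T, d_a).
    """
    Q = frames @ W_q                  # (T, d_a) queries from frames
    K = tokens @ W_k                  # (K, d_a) keys from tokens
    V = tokens @ W_v                  # (K, d_a) values from tokens
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (T, K)
    return attn @ V                   # weighted mix of token values

def encoding_layer(frames, codewords, smoothing):
    """Encoding-Layer-style pooling: soft-assign frame residuals to codewords.

    frames:    (T, d) frame-level features
    codewords: (K, d) learned codewords
    smoothing: (K,)   learned per-codeword smoothing factors
    Returns aggregated residuals of shape (K, d).
    """
    r = frames[:, None, :] - codewords[None, :, :]      # (T, K, d) residuals
    logits = -smoothing[None, :] * (r ** 2).sum(-1)     # (T, K) scaled distances
    a = softmax(logits, axis=-1)                        # soft assignments over K
    return (a[:, :, None] * r).sum(axis=0)              # aggregate per codeword
```

In a full model, the per-frame GQT embeddings would be fused with the frame features before frame-level score prediction, while the Encoding Layer output would be flattened into a fixed-length utterance representation; in training, the token embeddings, codewords, smoothing factors, and projection matrices would all be learned end to end.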


DOI: 10.21437/Interspeech.2020-2111

Cite as: Choi, Y., Jung, Y., Kim, H. (2020) Deep MOS Predictor for Synthetic Speech Using Cluster-Based Modeling. Proc. Interspeech 2020, 1743-1747, DOI: 10.21437/Interspeech.2020-2111.


@inproceedings{Choi2020,
  author={Yeunju Choi and Youngmoon Jung and Hoirin Kim},
  title={{Deep MOS Predictor for Synthetic Speech Using Cluster-Based Modeling}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1743--1747},
  doi={10.21437/Interspeech.2020-2111},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2111}
}