Towards Universal Text-to-Speech

Jingzhou Yang, Lei He


This paper studies a multilingual sequence-to-sequence text-to-speech (TTS) framework aimed at universal modeling, i.e., a single model able to synthesize speech for any speaker in any language. The framework consists of a transformer-based acoustic predictor and a WaveNet neural vocoder, with global conditions provided by speaker and language networks. It is evaluated on a massive TTS data set of around 1250 hours spanning 50 language locales, in which the amount of data per locale is highly unbalanced. Although the multilingual model exhibits transfer-learning ability that benefits low-resource languages, the data imbalance still undermines model performance. A data-balance training strategy is applied and effectively improves the voice quality of the low-resource languages. Furthermore, the paper examines the model's capacity to extend to new speakers and languages, a key step towards universal modeling. Experiments show that 20 seconds of data is sufficient to adapt to a new speaker, and 6 minutes to a new language.
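The abstract does not detail how the speaker and language networks inject their global conditions into the acoustic predictor and vocoder. The sketch below is one plausible reading, not the authors' implementation: speaker and language IDs pass through learned embedding tables, and their projection is broadcast over the encoder states as a global condition. All names (GlobalConditioner, d_model, d_embed) are hypothetical.

import torch
import torch.nn as nn

class GlobalConditioner(nn.Module):
    """Hypothetical global-conditioning module (an assumed design,
    not the paper's code): learned speaker and language embeddings
    are projected and added to every time step of the text-encoder
    output, so the downstream acoustic predictor is conditioned on
    who is speaking and in which language."""

    def __init__(self, n_speakers: int, n_languages: int,
                 d_model: int, d_embed: int = 64):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, d_embed)
        self.language_emb = nn.Embedding(n_languages, d_embed)
        self.proj = nn.Linear(2 * d_embed, d_model)

    def forward(self, encoder_out, speaker_id, language_id):
        # encoder_out: (batch, time, d_model); both ids: (batch,)
        cond = torch.cat([self.speaker_emb(speaker_id),
                          self.language_emb(language_id)], dim=-1)
        cond = self.proj(cond).unsqueeze(1)  # (batch, 1, d_model)
        return encoder_out + cond            # broadcast over time

Likewise, the abstract mentions a data-balance training strategy without specifying it. A common recipe for highly unbalanced multilingual corpora, sketched below purely as an assumption, is temperature-based sampling: each locale is drawn with probability proportional to its data share raised to a power alpha < 1, which over-samples low-resource locales relative to their raw hours.

import random

def balanced_sampling_probs(hours_per_locale, alpha=0.5):
    """Temperature-based locale sampling (an assumed strategy, not
    necessarily the paper's): p is proportional to hours ** alpha.
    alpha = 1 keeps the natural data distribution, while alpha
    approaching 0 tends toward uniform sampling over locales."""
    weights = {loc: h ** alpha for loc, h in hours_per_locale.items()}
    total = sum(weights.values())
    return {loc: w / total for loc, w in weights.items()}

# Example with one high-resource and one low-resource locale:
probs = balanced_sampling_probs({"en-US": 500.0, "cy-GB": 5.0})
locale = random.choices(list(probs), weights=list(probs.values()))[0]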


DOI: 10.21437/Interspeech.2020-1590

Cite as: Yang, J., He, L. (2020) Towards Universal Text-to-Speech. Proc. Interspeech 2020, 3171-3175, DOI: 10.21437/Interspeech.2020-1590.


@inproceedings{Yang2020,
  author={Jingzhou Yang and Lei He},
  title={{Towards Universal Text-to-Speech}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3171--3175},
  doi={10.21437/Interspeech.2020-1590},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1590}
}