Semi-Supervised Learning for Multi-Speaker Text-to-Speech Synthesis Using Discrete Speech Representation

Tao Tu, Yuan-Jui Chen, Alexander H. Liu, Hung-yi Lee


Recently, end-to-end multi-speaker text-to-speech (TTS) systems have achieved success when large amounts of high-quality speech and the corresponding transcriptions are available. However, the laborious process of collecting paired data prevents many institutes from building high-performance multi-speaker TTS systems. In this work, we propose a semi-supervised learning approach for multi-speaker TTS. A multi-speaker TTS model can learn from untranscribed audio via the proposed encoder-decoder framework with discrete speech representation. The experimental results demonstrate that with only an hour of paired speech data, whether the paired data comes from multiple speakers or a single speaker, the proposed model can generate intelligible speech in different voices. We also find that the model benefits from the proposed semi-supervised learning approach even when part of the unpaired speech data is noisy. In addition, our analysis reveals that the speaker characteristics of the paired data affect the effectiveness of semi-supervised TTS.
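
To make the idea concrete, the following is a minimal, hypothetical PyTorch sketch of the kind of encoder-decoder with a discrete speech bottleneck the abstract describes: a speech encoder quantizes mel-spectrogram frames against a learned codebook, and a decoder reconstructs speech from those discrete units plus a speaker embedding, so untranscribed audio can still provide a training signal for the decoder. The module names, layer sizes, nearest-neighbour quantizer, and training step shown are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeechEncoder(nn.Module):
    """Encodes mel-spectrogram frames into a sequence of discrete codes (assumed quantizer)."""

    def __init__(self, n_mels=80, hidden=256, codebook_size=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.codebook = nn.Embedding(codebook_size, hidden)

    def forward(self, mel):                                    # mel: (B, T, n_mels)
        h, _ = self.rnn(mel)                                   # (B, T, hidden)
        # Nearest codebook entry per frame gives the discrete representation.
        flat = h.reshape(-1, h.size(-1))                       # (B*T, hidden)
        dist = torch.cdist(flat, self.codebook.weight)         # (B*T, K)
        codes = dist.argmin(dim=-1).view(h.size(0), h.size(1)) # (B, T)
        quantized = self.codebook(codes)                       # (B, T, hidden)
        # Straight-through estimator so gradients still reach the encoder.
        quantized = h + (quantized - h).detach()
        return codes, quantized


class SharedDecoder(nn.Module):
    """Maps discrete-unit embeddings plus a speaker embedding back to mel frames."""

    def __init__(self, hidden=256, n_mels=80, n_speakers=100):
        super().__init__()
        self.spk = nn.Embedding(n_speakers, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, units, speaker_id):                      # units: (B, T, hidden)
        x = units + self.spk(speaker_id).unsqueeze(1)
        h, _ = self.rnn(x)
        return self.out(h)                                     # (B, T, n_mels)


# Unsupervised step: reconstruct untranscribed audio through the discrete bottleneck,
# training the decoder and speaker embeddings without any transcriptions.
encoder, decoder = SpeechEncoder(), SharedDecoder()
mel = torch.randn(4, 120, 80)                                  # a batch of unpaired mel-spectrograms
speaker_id = torch.randint(0, 100, (4,))
_, quantized = encoder(mel)
reconstruction = decoder(quantized, speaker_id)
loss_unpaired = F.l1_loss(reconstruction, mel)
# The small amount of paired data would add a text-driven loss on the same decoder here.

In such a setup, the paired branch would map text to the same intermediate representation consumed by the shared decoder, which is what lets an hour of transcribed speech suffice once the decoder has been trained on plentiful unpaired audio.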


 DOI: 10.21437/Interspeech.2020-1824

Cite as: Tu, T., Chen, Y., Liu, A.H., Lee, H. (2020) Semi-Supervised Learning for Multi-Speaker Text-to-Speech Synthesis Using Discrete Speech Representation. Proc. Interspeech 2020, 3191-3195, DOI: 10.21437/Interspeech.2020-1824.


@inproceedings{Tu2020,
  author={Tao Tu and Yuan-Jui Chen and Alexander H. Liu and Hung-yi Lee},
  title={{Semi-Supervised Learning for Multi-Speaker Text-to-Speech Synthesis Using Discrete Speech Representation}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3191--3195},
  doi={10.21437/Interspeech.2020-1824},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1824}
}