Unsupervised Learning for Sequence-to-Sequence Text-to-Speech for Low-Resource Languages

Haitong Zhang, Yue Lin


Recently, sequence-to-sequence models with attention have been successfully applied to text-to-speech (TTS). These models can generate near-human speech when trained on a large, accurately transcribed speech corpus. However, preparing such a large dataset is both expensive and laborious. To alleviate this heavy data demand, we propose a novel unsupervised pre-training mechanism. Specifically, we first use a Vector-Quantized Variational Autoencoder (VQ-VAE) to extract unsupervised linguistic units from large-scale, publicly available, untranscribed speech. We then pre-train the sequence-to-sequence TTS model on the resulting <unsupervised linguistic units, audio> pairs. Finally, we fine-tune the model with a small amount of <text, audio> paired data from the target speaker. Both objective and subjective evaluations show that the proposed method synthesizes more intelligible and natural speech given the same amount of paired training data. In addition, we extend the method to hypothesized low-resource languages and verify its effectiveness using objective evaluation.
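The core of the unit-extraction step is VQ-VAE's nearest-neighbor quantization: each continuous encoder output frame is replaced by the index of its closest codebook vector, yielding a discrete "pseudo-text" sequence that can stand in for phonemes during pre-training. The following is a minimal sketch of that lookup only (the `quantize` helper, codebook, and toy values are illustrative, not the authors' implementation):

```python
import numpy as np

def quantize(frames, codebook):
    """Map each encoder output frame to the index of its nearest codebook vector.

    frames:   (T, D) continuous encoder outputs for T frames
    codebook: (K, D) learned VQ-VAE code vectors
    returns:  (T,)   discrete unit indices, usable as pseudo-text
    """
    # Squared Euclidean distance from every frame to every code vector: (T, K)
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    # Nearest code per frame gives the unsupervised linguistic unit sequence
    return dists.argmin(axis=1)

# Toy example: 4 frames, 3 codes, 2-dim features
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
frames = np.array([[0.1, -0.1], [0.9, 1.1], [2.2, 1.9], [1.1, 0.8]])
units = quantize(frames, codebook)
print(units.tolist())  # [0, 1, 2, 1]
```

The resulting index sequences play the role of transcripts when pre-training the TTS model on untranscribed audio; fine-tuning then swaps these pseudo-transcripts for real text from the target speaker.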


DOI: 10.21437/Interspeech.2020-1403

Cite as: Zhang, H., Lin, Y. (2020) Unsupervised Learning for Sequence-to-Sequence Text-to-Speech for Low-Resource Languages. Proc. Interspeech 2020, 3161-3165, DOI: 10.21437/Interspeech.2020-1403.


@inproceedings{Zhang2020,
  author={Haitong Zhang and Yue Lin},
  title={{Unsupervised Learning for Sequence-to-Sequence Text-to-Speech for Low-Resource Languages}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3161--3165},
  doi={10.21437/Interspeech.2020-1403},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1403}
}