Can Speaker Augmentation Improve Multi-Speaker End-to-End TTS?

Erica Cooper, Cheng-I Lai, Yusuke Yasuda, Junichi Yamagishi


Previous work on speaker adaptation for end-to-end speech synthesis still falls short in speaker similarity. We investigate an approach orthogonal to current speaker adaptation paradigms: speaker augmentation, both by creating artificial speakers and by taking advantage of low-quality data. The base Tacotron2 model is modified to account for the channel and dialect factors inherent in these corpora. In addition, we describe a warm-start training strategy that we adopted for Tacotron2 training. A large-scale listening test is conducted, and a distance metric is adopted to evaluate synthesis of dialects. This is followed by an analysis of synthesis quality, speaker and dialect similarity, and a remark on the effectiveness of our speaker augmentation approach. Audio samples are available online.
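The abstract does not specify how artificial speakers are created; one common way to do this in speech augmentation is to warp an existing speaker's recordings by resampling, which shifts pitch and formants together. The sketch below is a minimal, hypothetical illustration of that general idea using only NumPy, not a reproduction of the paper's method:

```python
import numpy as np

def perturb_speaker(wav: np.ndarray, factor: float) -> np.ndarray:
    """Warp a waveform by `factor` via linear-interpolation resampling.

    Played back at the original sampling rate, factor > 1 shortens the
    signal and raises pitch/formants, yielding an 'artificial speaker'
    derived from the original one.
    """
    n_out = int(len(wav) / factor)
    # Fractional positions in the original signal for each output sample.
    positions = np.arange(n_out) * factor
    return np.interp(positions, np.arange(len(wav)), wav)

# Example: a 1 kHz sine at 16 kHz becomes ~1.1 kHz after a 1.1x warp.
sr = 16000
t = np.arange(sr) / sr
wav = np.sin(2 * np.pi * 1000 * t)
warped = perturb_speaker(wav, 1.1)
```

A training pipeline would apply a few such factors (e.g. 0.9, 1.1) to each speaker's utterances and treat each warped copy as a new speaker identity.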


DOI: 10.21437/Interspeech.2020-1229

Cite as: Cooper, E., Lai, C., Yasuda, Y., Yamagishi, J. (2020) Can Speaker Augmentation Improve Multi-Speaker End-to-End TTS?. Proc. Interspeech 2020, 3979-3983, DOI: 10.21437/Interspeech.2020-1229.


@inproceedings{Cooper2020,
  author={Erica Cooper and Cheng-I Lai and Yusuke Yasuda and Junichi Yamagishi},
  title={{Can Speaker Augmentation Improve Multi-Speaker End-to-End TTS?}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3979--3983},
  doi={10.21437/Interspeech.2020-1229},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1229}
}