Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding

Seungwoo Choi, Seungju Han, Dongyoung Kim, Sungjoo Ha


With growing demand for personalization, the need for a so-called few-shot TTS system that can clone a speaker's voice from only a few data samples is emerging. To address this need, we propose Attentron, a few-shot TTS model that clones the voices of speakers unseen during training. It introduces two special encoders, each serving a different purpose. A fine-grained encoder extracts variable-length style information via an attention mechanism, and a coarse-grained encoder greatly stabilizes speech synthesis, avoiding unintelligible output even when synthesizing speech for unseen speakers. In addition, the model can scale to an arbitrary number of reference audio samples to improve the quality of the synthesized speech. Our experiments, including a human evaluation, show that the proposed model significantly outperforms state-of-the-art models in terms of speaker similarity and quality when generating speech for unseen speakers.
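
To make the two-encoder idea concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation): the coarse-grained encoder pools each reference utterance into a single fixed-length embedding, while the fine-grained encoder attends from (assumed) decoder states over the frames of all reference audios, producing one style vector per decoder step, i.e. a variable-length embedding. All module names, dimensions, and pooling choices below are assumptions for illustration only.

import torch
import torch.nn as nn


class CoarseGrainedEncoder(nn.Module):
    # Produces one fixed-length embedding for the set of reference audios (assumed layout).
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, ref_mels):                       # ref_mels: (B, N_ref, T, n_mels)
        b, n, t, f = ref_mels.shape
        _, h = self.rnn(ref_mels.reshape(b * n, t, f))  # final hidden state per reference
        return h[-1].reshape(b, n, -1).mean(dim=1)      # average over references -> (B, dim)


class FineGrainedEncoder(nn.Module):
    # Attends from decoder states to every reference frame: a variable-length embedding,
    # one style vector per decoder step.
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.proj = nn.Linear(n_mels, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, query, ref_mels):                 # query: (B, T_dec, dim) decoder states
        b, n, t, f = ref_mels.shape
        keys = self.proj(ref_mels.reshape(b, n * t, f))  # concatenate frames of all references
        out, _ = self.attn(query, keys, keys)
        return out                                       # (B, T_dec, dim)


if __name__ == "__main__":
    refs = torch.randn(2, 3, 200, 80)       # 3 reference audios per item (hypothetical shapes)
    dec = torch.randn(2, 50, 128)            # stand-in for decoder hidden states
    print(CoarseGrainedEncoder()(refs).shape)    # torch.Size([2, 128])
    print(FineGrainedEncoder()(dec, refs).shape)  # torch.Size([2, 50, 128])

Concatenating the frames of all references before attention is how this sketch reflects the abstract's claim that the model scales to an arbitrary number of reference audios: adding references only enlarges the set of frames the attention can select from.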


DOI: 10.21437/Interspeech.2020-2096

Cite as: Choi, S., Han, S., Kim, D., Ha, S. (2020) Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding. Proc. Interspeech 2020, 2007-2011, DOI: 10.21437/Interspeech.2020-2096.


@inproceedings{Choi2020,
  author={Seungwoo Choi and Seungju Han and Dongyoung Kim and Sungjoo Ha},
  title={{Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2007--2011},
  doi={10.21437/Interspeech.2020-2096},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2096}
}