SpeedySpeech: Efficient Neural Speech Synthesis

Jan Vainer, Ondřej Dušek


While recent neural sequence-to-sequence models have greatly improved the quality of speech synthesis, there has not been a system capable of fast training, fast inference, and high-quality audio synthesis at the same time. We propose a student-teacher network capable of high-quality, faster-than-real-time spectrogram synthesis, with low computational requirements and fast training time. We show that self-attention layers are not necessary for the generation of high-quality audio. We use simple convolutional blocks with residual connections in both the student and teacher networks, and only a single attention layer in the teacher model. Coupled with a MelGAN vocoder, our model’s voice quality was rated significantly higher than Tacotron 2’s. Our model can be trained efficiently on a single GPU and can run in real time even on a CPU. We provide both our source code and audio samples in our GitHub repository.
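As an illustration of the residual convolutional blocks mentioned in the abstract, below is a minimal PyTorch sketch. The class name ResidualConvBlock and all layer choices (kernel size, dilation, batch normalization, activation order) are our assumptions for illustration, not the paper's exact configuration.

import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Hypothetical sketch of a residual 1D convolutional block of the
    kind the abstract describes; hyperparameters are assumptions."""
    def __init__(self, channels: int, kernel_size: int = 5, dilation: int = 1):
        super().__init__()
        # "same" padding so the time dimension is preserved
        padding = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=padding)
        self.norm = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the block refines x rather than replacing it
        return x + self.norm(self.relu(self.conv(x)))

# Usage: tensors are (batch, channels, spectrogram frames)
block = ResidualConvBlock(channels=128, dilation=2)
x = torch.randn(8, 128, 200)
assert block(x).shape == x.shape

Stacking such blocks with increasing dilation grows the receptive field over spectrogram frames without any self-attention, which is consistent with the abstract's claim that attention is needed only once, in the teacher.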


DOI: 10.21437/Interspeech.2020-2867

Cite as: Vainer, J., Dušek, O. (2020) SpeedySpeech: Efficient Neural Speech Synthesis. Proc. Interspeech 2020, 3575-3579, DOI: 10.21437/Interspeech.2020-2867.


@inproceedings{Vainer2020,
  author={Jan Vainer and Ondřej Dušek},
  title={{SpeedySpeech: Efficient Neural Speech Synthesis}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3575--3579},
  doi={10.21437/Interspeech.2020-2867},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2867}
}