GELP: GAN-Excited Linear Prediction for Speech Synthesis from Mel-Spectrogram

Lauri Juvela, Bajibabu Bollepalli, Junichi Yamagishi, Paavo Alku

Recent advances in neural network-based text-to-speech have reached human-level naturalness in synthetic speech. Current sequence-to-sequence models can map text directly to mel-spectrogram acoustic features, which are convenient for modeling but pose additional challenges for vocoding (i.e., waveform generation from the acoustic features). High-quality synthesis can be achieved with neural vocoders such as WaveNet, but such autoregressive models suffer from slow sequential inference. Meanwhile, their existing parallel-inference counterparts are difficult to train and require increasingly large model sizes. In this paper, we propose an alternative training strategy for a parallel neural vocoder based on generative adversarial networks, and integrate a linear predictive synthesis filter into the model. Results show that the proposed model achieves a significant improvement in inference speed, while outperforming a WaveNet baseline in copy-synthesis quality.
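The key architectural idea stated above is that the neural network generates only an excitation signal, which a classical linear predictive (LP) synthesis filter 1/A(z) then shapes into speech. The following is a minimal sketch of that LP synthesis step, not the authors' implementation; the GAN-generated excitation is stood in for by white noise, and the filter coefficients are a hand-picked stable example rather than coefficients derived from a mel-spectrogram.

```python
import numpy as np
from scipy.signal import lfilter

def lp_synthesis(excitation, a):
    """Filter an excitation signal through the all-pole filter 1/A(z).

    a: LP coefficients [1, a_1, ..., a_p], as produced by standard LP analysis.
    In GELP the excitation would come from the GAN generator and `a` would be
    derived from the acoustic features; both are toy stand-ins here.
    """
    return lfilter([1.0], a, excitation)

# Toy example: 1 s of white-noise "excitation" through a 2nd-order resonator.
rng = np.random.default_rng(0)
e = rng.standard_normal(16000)                      # stand-in excitation
a = np.array([1.0, -1.8 * np.cos(0.2), 0.81])       # poles at radius 0.9 (stable)
y = lp_synthesis(e, a)
```

Because the synthesis filter is a fixed, differentiable linear operation, it can sit inside the model while gradients flow through to the excitation generator during adversarial training.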

DOI: 10.21437/Interspeech.2019-2008

Cite as: Juvela, L., Bollepalli, B., Yamagishi, J., Alku, P. (2019) GELP: GAN-Excited Linear Prediction for Speech Synthesis from Mel-Spectrogram. Proc. Interspeech 2019, 694-698, DOI: 10.21437/Interspeech.2019-2008.

@inproceedings{juvela19_interspeech,
  author={Lauri Juvela and Bajibabu Bollepalli and Junichi Yamagishi and Paavo Alku},
  title={{GELP: GAN-Excited Linear Prediction for Speech Synthesis from Mel-Spectrogram}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={694--698},
  doi={10.21437/Interspeech.2019-2008}
}