VocGAN: A High-Fidelity Real-Time Vocoder with a Hierarchically-Nested Adversarial Network

Jinhyeok Yang, Junmo Lee, Youngik Kim, Hoon-Young Cho, Injung Kim


We present a novel high-fidelity real-time neural vocoder called VocGAN. A recently developed GAN-based vocoder, MelGAN, produces speech waveforms in real-time. However, it often produces a waveform that is insufficient in quality or inconsistent with the acoustic characteristics of the input mel spectrogram. VocGAN is nearly as fast as MelGAN, but it significantly improves the quality and consistency of the output waveform. VocGAN applies a multi-scale waveform generator and a hierarchically-nested discriminator to learn multiple levels of acoustic properties in a balanced way. It also applies a joint conditional and unconditional objective, which has shown successful results in high-resolution image synthesis. In experiments, VocGAN synthesizes speech waveforms 416.7× faster than real-time on a GTX 1080Ti GPU and 3.24× faster than real-time on a CPU. Compared with MelGAN, it also exhibits significantly improved quality on multiple evaluation metrics, including mean opinion score (MOS), with minimal additional overhead. Additionally, compared with Parallel WaveGAN, another recently developed high-fidelity vocoder, VocGAN is 6.98× faster on a CPU and exhibits a higher MOS.
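The joint conditional and unconditional objective mentioned above can be illustrated with a toy sketch. This is a hypothetical, minimal NumPy illustration, not the paper's actual networks: the discriminator scores here are placeholder functions, and the least-squares formulation is only one common GAN loss choice. The idea it shows is that the generator is penalized both by an unconditional branch (is the waveform realistic?) and a conditional branch (is it consistent with the input mel spectrogram?).

```python
import numpy as np

rng = np.random.default_rng(0)

def d_uncond(wave):
    # Placeholder unconditional discriminator: scores the waveform alone.
    # A real discriminator would be a learned neural network.
    return float(np.tanh(wave.mean()))

def d_cond(wave, mel):
    # Placeholder conditional discriminator: scores the waveform together
    # with its input mel spectrogram, encouraging acoustic consistency.
    return float(np.tanh(wave.mean() + mel.mean()))

def joint_generator_loss(fake_wave, mel):
    # Least-squares GAN generator loss (a common choice, assumed here),
    # summed over the unconditional and conditional branches.
    loss_uncond = (d_uncond(fake_wave) - 1.0) ** 2
    loss_cond = (d_cond(fake_wave, mel) - 1.0) ** 2
    return loss_uncond + loss_cond

fake = rng.standard_normal(16000)     # 1 s of "generated" audio at 16 kHz
mel = rng.standard_normal((80, 64))   # toy 80-band mel spectrogram
loss = joint_generator_loss(fake, mel)
print(loss >= 0.0)
```

Because `tanh` is bounded in (-1, 1), each squared term lies in [0, 4], so this toy loss is always in [0, 8]; in training, minimizing both terms pushes the generator to satisfy both discriminator branches simultaneously.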


DOI: 10.21437/Interspeech.2020-1238

Cite as: Yang, J., Lee, J., Kim, Y., Cho, H.-Y., Kim, I. (2020) VocGAN: A High-Fidelity Real-Time Vocoder with a Hierarchically-Nested Adversarial Network. Proc. Interspeech 2020, 200-204, DOI: 10.21437/Interspeech.2020-1238.


@inproceedings{Yang2020,
  author={Jinhyeok Yang and Junmo Lee and Youngik Kim and Hoon-Young Cho and Injung Kim},
  title={{VocGAN: A High-Fidelity Real-Time Vocoder with a Hierarchically-Nested Adversarial Network}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={200--204},
  doi={10.21437/Interspeech.2020-1238},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1238}
}