Audio Dequantization for High Fidelity Audio Generation in Flow-Based Neural Vocoder

Hyun-Wook Yoon, Sang-Hoon Lee, Hyeong-Rae Noh, Seong-Whan Lee


In recent works, flow-based neural vocoders have shown significant improvement in real-time speech generation tasks. A sequence of invertible flow operations allows the model to convert samples from a simple distribution into audio samples. However, training a continuous density model on discrete audio data can degrade model performance due to the topological difference between the latent and the actual data distributions. To resolve this problem, we propose audio dequantization methods for flow-based neural vocoders to enable high-fidelity audio generation. Data dequantization is a well-known method in image generation but has not yet been studied in the audio domain. For this reason, we implement various audio dequantization methods in a flow-based neural vocoder and investigate their effect on the generated audio. We conduct various objective performance assessments and a subjective evaluation to show that audio dequantization can improve audio generation quality. In our experiments, audio dequantization produces waveforms with better harmonic structure and fewer digital artifacts.
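To make the core idea concrete, the simplest form of data dequantization (borrowed from image generation) adds uniform noise to each discrete sample so that the continuous density model never sees exactly-repeated quantized values. The sketch below illustrates this for 16-bit PCM audio; the function name, scaling, and noise distribution are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def uniform_dequantize(audio_int16, rng=None):
    """Uniformly dequantize 16-bit PCM audio (illustrative sketch).

    Each discrete sample value is spread over its unit-width
    quantization bin by adding uniform noise in [0, 1) before
    rescaling back to the [-1, 1) range.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = audio_int16.astype(np.float64) + 32768.0   # shift to [0, 65536)
    x = x + rng.uniform(0.0, 1.0, size=x.shape)    # fill each unit-width bin
    return x / 65536.0 * 2.0 - 1.0                 # rescale to [-1, 1)

# Dequantized values stay within one quantization step of the original
pcm = np.array([-32768, 0, 32767], dtype=np.int16)
y = uniform_dequantize(pcm, rng=np.random.default_rng(0))
```

More expressive variants (e.g. variational dequantization, which learns the noise distribution instead of fixing it to uniform) follow the same pattern of replacing each discrete value with a point drawn from its quantization bin.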


DOI: 10.21437/Interspeech.2020-1226

Cite as: Yoon, H., Lee, S., Noh, H., Lee, S. (2020) Audio Dequantization for High Fidelity Audio Generation in Flow-Based Neural Vocoder. Proc. Interspeech 2020, 3545-3549, DOI: 10.21437/Interspeech.2020-1226.


@inproceedings{Yoon2020,
  author={Hyun-Wook Yoon and Sang-Hoon Lee and Hyeong-Rae Noh and Seong-Whan Lee},
  title={{Audio Dequantization for High Fidelity Audio Generation in Flow-Based Neural Vocoder}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3545--3549},
  doi={10.21437/Interspeech.2020-1226},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1226}
}