Knowledge-and-Data-Driven Amplitude Spectrum Prediction for Hierarchical Neural Vocoders

Yang Ai, Zhen-Hua Ling


In our previous work, we proposed a neural vocoder called HiNet, which recovers speech waveforms by hierarchically predicting amplitude and phase spectra from input acoustic features. In HiNet, the amplitude spectrum predictor (ASP) predicts log amplitude spectra (LAS) from input acoustic features. This paper proposes a novel knowledge-and-data-driven ASP (KDD-ASP) to improve upon the conventional one. First, acoustic features (i.e., F0 and mel-cepstra) pass through a knowledge-driven LAS recovery module to obtain approximate LAS (ALAS). This module is designed based on a combination of STFT analysis and the source-filter theory of speech production, in which the source part and the filter part are derived from the input F0 and mel-cepstra, respectively. Then, the recovered ALAS are processed by a data-driven LAS refinement module, which consists of multiple trainable convolutional layers, to obtain the final LAS. Experimental results show that the HiNet vocoder using KDD-ASP achieves higher synthetic speech quality than both the HiNet vocoder using the conventional ASP and the WaveRNN vocoder on a text-to-speech (TTS) task.
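The knowledge-driven part of the pipeline above can be sketched as follows. This is a minimal, illustrative reconstruction, not the paper's exact formulation: it assumes the filter part is a log spectral envelope obtained from cepstral coefficients via a cosine expansion (ignoring mel warping), and the source part is a set of soft harmonic peaks at multiples of F0; in the log domain their source-filter product becomes a sum. All function and parameter names are hypothetical.

```python
import numpy as np

def approximate_las_frame(f0, mcep, n_fft=512, sr=16000):
    """Sketch of one frame of knowledge-driven ALAS recovery (assumed form).

    Filter part: log|H(w)| = c0 + 2 * sum_k c_k * cos(k*w), a standard
    cepstrum-to-log-envelope expansion (mel warping omitted for brevity).
    Source part: Gaussian bumps at harmonics of F0 (flat when unvoiced).
    """
    n_bins = n_fft // 2 + 1
    omega = np.pi * np.arange(n_bins) / (n_bins - 1)  # normalized freq 0..pi

    # Filter part from cepstral coefficients.
    k = np.arange(1, len(mcep))
    log_env = mcep[0] + 2.0 * (mcep[1:] @ np.cos(np.outer(k, omega)))

    # Source part from F0: harmonic structure for voiced frames.
    log_src = np.zeros(n_bins)
    if f0 > 0:
        freqs = np.linspace(0.0, sr / 2.0, n_bins)
        for h in np.arange(f0, sr / 2.0, f0):
            log_src += np.exp(-0.5 * ((freqs - h) / (0.1 * f0)) ** 2)

    # Source-filter multiplication is addition in the log-amplitude domain.
    return log_env + log_src
```

In the full KDD-ASP, a stack of trainable convolutional layers would then refine a sequence of such ALAS frames into the final LAS.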


DOI: 10.21437/Interspeech.2020-1046

Cite as: Ai, Y., Ling, Z. (2020) Knowledge-and-Data-Driven Amplitude Spectrum Prediction for Hierarchical Neural Vocoders. Proc. Interspeech 2020, 190-194, DOI: 10.21437/Interspeech.2020-1046.


@inproceedings{Ai2020,
  author={Yang Ai and Zhen-Hua Ling},
  title={{Knowledge-and-Data-Driven Amplitude Spectrum Prediction for Hierarchical Neural Vocoders}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={190--194},
  doi={10.21437/Interspeech.2020-1046},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1046}
}