Reverberation Modeling for Source-Filter-Based Neural Vocoder

Yang Ai, Xin Wang, Junichi Yamagishi, Zhen-Hua Ling


This paper presents a reverberation module for source-filter-based neural vocoders that improves the modeling of reverberant effects. The module takes the output waveform of a neural vocoder as input and produces a reverberant waveform by convolving the input with a room impulse response (RIR). We propose two approaches to parameterizing and estimating the RIR. The first assumes a global time-invariant (GTI) RIR and learns its values directly on a training dataset. The second assumes an utterance-level time-variant (UTV) RIR, which is constant within an utterance but varies across utterances, and uses an additional neural network to predict the RIR values. We add the proposed reverberation module to the phase spectrum predictor (PSP) of a HiNet vocoder and train the model jointly. Experimental results demonstrate that the proposed module helps model the reverberation effect and improves the perceived quality of generated reverberant speech. The UTV-RIR was more robust than the GTI-RIR to unknown reverberation conditions and achieved a perceptually better reverberation effect.
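The core operation of the reverberation module, applying an RIR to a dry waveform by linear convolution, can be sketched as follows. This is a minimal NumPy illustration of the inference-time computation, not the authors' implementation; the function name and the toy RIR values are hypothetical.

```python
import numpy as np

def apply_rir(dry, rir):
    """Convolve a dry waveform with a room impulse response (RIR) via
    FFT-based linear convolution, as the reverberation module does with
    the vocoder output.  (Illustrative sketch; in the paper the RIR is
    either learned globally (GTI) or predicted per utterance (UTV).)"""
    n = len(dry) + len(rir) - 1               # length of the full linear convolution
    nfft = 1 << (n - 1).bit_length()          # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(rir, nfft), nfft)
    return wet[:n]

# Toy example: a unit impulse as the "dry" signal reproduces the RIR shape.
dry = np.zeros(8)
dry[0] = 1.0
rir = np.array([1.0, 0.5, 0.25])              # hypothetical decaying RIR taps
wet = apply_rir(dry, rir)
```

In the GTI approach, `rir` would be a trainable parameter vector optimized jointly with the vocoder; in the UTV approach, it would be the output of a separate network evaluated once per utterance.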


DOI: 10.21437/Interspeech.2020-1613

Cite as: Ai, Y., Wang, X., Yamagishi, J., Ling, Z. (2020) Reverberation Modeling for Source-Filter-Based Neural Vocoder. Proc. Interspeech 2020, 3560-3564, DOI: 10.21437/Interspeech.2020-1613.


@inproceedings{Ai2020,
  author={Yang Ai and Xin Wang and Junichi Yamagishi and Zhen-Hua Ling},
  title={{Reverberation Modeling for Source-Filter-Based Neural Vocoder}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3560--3564},
  doi={10.21437/Interspeech.2020-1613},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1613}
}