Recognition-Synthesis Based Non-Parallel Voice Conversion with Adversarial Learning

Jing-Xuan Zhang, Zhen-Hua Ling, Li-Rong Dai

This paper presents an adversarial learning method for recognition-synthesis based non-parallel voice conversion. A recognizer transforms acoustic features into linguistic representations, while a synthesizer recovers output acoustic features from the recognizer outputs together with the speaker identity. Because the speaker characteristics are separated from the linguistic representations, voice conversion can be achieved by replacing the source speaker identity with the target one. In our proposed method, a speaker adversarial loss is adopted so that the recognizer produces speaker-independent linguistic representations. Furthermore, discriminators are introduced and a generative adversarial network (GAN) loss is used to prevent the predicted features from being over-smoothed. Model parameters are trained with a strategy of pre-training on a multi-speaker dataset and then fine-tuning on the source-target speaker pair. Our method achieved higher similarity than the baseline model that obtained the best performance in Voice Conversion Challenge 2018.
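The abstract combines three training terms: a reconstruction-style loss for the synthesizer, a GAN loss from the discriminators, and a speaker adversarial loss that the recognizer maximizes (commonly implemented with gradient reversal) to make its linguistic representations speaker-independent. The following is a minimal pure-Python sketch of how such an objective is composed; all dimensions, weights, and loss values here are illustrative placeholders, not quantities from the paper.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # Negative log-probability of the true class.
    return -math.log(softmax(logits)[label])

# Hypothetical toy numbers: a 4-dim "linguistic representation" and a
# linear classifier over 3 speakers (weights are illustrative only).
random.seed(0)
linguistic = [random.gauss(0, 1) for _ in range(4)]
W_spk = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
spk_logits = [sum(w * x for w, x in zip(row, linguistic)) for row in W_spk]

# Speaker adversarial term: the speaker classifier minimizes this
# cross-entropy, while the recognizer maximizes it (e.g. via gradient
# reversal), pushing the representation toward speaker independence.
l_spk = cross_entropy(spk_logits, label=1)

# Placeholders standing in for the synthesizer reconstruction loss and
# the discriminator (GAN) loss described in the abstract.
l_recon, l_gan = 0.8, 0.3
w_spk, w_gan = 0.1, 1.0  # assumed loss weights, not the paper's values

# Combined recognizer/synthesizer objective: minimize the reconstruction
# and GAN terms, while the sign-flipped speaker term rewards fooling the
# speaker classifier.
total = l_recon + w_gan * l_gan - w_spk * l_spk
```

In a real implementation the speaker classifier and discriminators are trained with the opposite sign of their respective terms, alternating updates as in standard adversarial training.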

DOI: 10.21437/Interspeech.2020-0036

Cite as: Zhang, J., Ling, Z., Dai, L. (2020) Recognition-Synthesis Based Non-Parallel Voice Conversion with Adversarial Learning. Proc. Interspeech 2020, 771-775, DOI: 10.21437/Interspeech.2020-0036.

@inproceedings{zhang2020recognition,
  author={Jing-Xuan Zhang and Zhen-Hua Ling and Li-Rong Dai},
  title={{Recognition-Synthesis Based Non-Parallel Voice Conversion with Adversarial Learning}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={771--775},
  doi={10.21437/Interspeech.2020-0036}
}