Voice Conversion from Unaligned Corpora Using Variational Autoencoding Wasserstein Generative Adversarial Networks

Chin-Cheng Hsu, Hsin-Te Hwang, Yi-Chiao Wu, Yu Tsao, Hsin-Min Wang


Building a voice conversion (VC) system from non-parallel speech corpora is challenging but highly valuable in real application scenarios: in most situations, the source and target speakers do not utter the same texts, and may even speak different languages. One possible, although indirect, solution is to build a generative model of speech. Generative models focus on explaining the observations with latent variables instead of learning a pairwise transformation function, thereby bypassing the requirement of frame alignment between source and target speech. In this paper, we propose a non-parallel VC framework with a variational autoencoding Wasserstein generative adversarial network (VAW-GAN) that explicitly considers a VC objective when building the speech model. Experimental results corroborate the capability of our framework to build a VC system from unaligned data and demonstrate improved conversion quality.
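The latent-variable idea in the abstract can be sketched in standard VAE/WGAN notation; the symbols and the weighting \(\alpha\) below are assumptions based on common variational-autoencoder and Wasserstein-GAN formulations, not quantities stated in the abstract itself:

```latex
% Sketch (assumed notation): an encoder E_\phi maps a source spectral frame
% x_s to a latent code z intended to be speaker-independent; a decoder
% G_\theta reconstructs a frame conditioned on a speaker code y, so
% conversion requires no frame alignment between corpora:
\[
  z = E_\phi(x_s), \qquad \hat{x}_t = G_\theta(z,\, y_t)
\]
% A plausible combined objective pairs the VAE evidence lower bound with a
% Wasserstein critic term J_WGAN on the decoder outputs, weighted by a
% hyperparameter \alpha:
\[
  \mathcal{J} =
  -D_{\mathrm{KL}}\!\big(q_\phi(z \mid x)\,\big\|\,p(z)\big)
  + \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z, y)\big]
  + \alpha\,\mathcal{J}_{\mathrm{WGAN}}
\]
```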


DOI: 10.21437/Interspeech.2017-63

Cite as: Hsu, C., Hwang, H., Wu, Y., Tsao, Y., Wang, H. (2017) Voice Conversion from Unaligned Corpora Using Variational Autoencoding Wasserstein Generative Adversarial Networks. Proc. Interspeech 2017, 3364-3368, DOI: 10.21437/Interspeech.2017-63.


@inproceedings{Hsu2017,
  author={Chin-Cheng Hsu and Hsin-Te Hwang and Yi-Chiao Wu and Yu Tsao and Hsin-Min Wang},
  title={Voice Conversion from Unaligned Corpora Using Variational Autoencoding Wasserstein Generative Adversarial Networks},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={3364--3368},
  doi={10.21437/Interspeech.2017-63},
  url={http://dx.doi.org/10.21437/Interspeech.2017-63}
}