Data Efficient Voice Cloning from Noisy Samples with Domain Adversarial Training

Jian Cong, Shan Yang, Lei Xie, Guoqiao Yu, Guanglu Wan


Data efficient voice cloning aims to synthesize a target speaker’s voice with only a few enrollment samples at hand. To this end, speaker adaptation and speaker encoding are two typical approaches built on a base model trained on multiple speakers. The former uses a small set of target speaker data to transfer the multi-speaker model to the target speaker’s voice through direct model updates, while in the latter, only a few seconds of the target speaker’s audio are passed through an extra speaker encoding model, together with the multi-speaker model, to synthesize the target speaker’s voice without any model update. However, both methods require clean target speaker data, whereas the samples provided by users in real applications inevitably contain acoustic noise; generating the target voice from noisy data remains challenging. In this paper, we study the data efficient voice cloning problem from noisy samples under the sequence-to-sequence TTS paradigm. Specifically, we introduce domain adversarial training (DAT) into speaker adaptation and speaker encoding, aiming to disentangle noise from the speech-noise mixture. Experiments show that for both speaker adaptation and speaker encoding, the proposed approaches consistently synthesize clean speech from noisy speaker samples, clearly outperforming a method that adopts a state-of-the-art speech enhancement module.
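At the heart of domain adversarial training is a gradient reversal layer (GRL): an identity mapping in the forward pass whose backward pass negates (and scales) the gradient, so the feature encoder learns representations that confuse a noise/domain classifier. The minimal scalar sketch below illustrates that mechanic only; the class name, the `lam` scale, and the hand-rolled backward are illustrative assumptions, not the paper's implementation.

```python
class GradientReversal:
    """Gradient reversal layer (GRL) sketch for domain adversarial training.

    Forward: identity on the encoder features.
    Backward: multiplies the incoming gradient by -lam, reversing the
    training signal from the domain (noise) classifier before it reaches
    the encoder, so the encoder is pushed to remove domain information.
    """

    def __init__(self, lam: float = 1.0):
        # lam is the adversarial weight (a hypothetical hyperparameter here)
        self.lam = lam

    def forward(self, features):
        # Identity: features flow unchanged to the domain classifier
        return features

    def backward(self, grad_output):
        # Reverse and scale the gradient flowing back to the encoder
        return [-self.lam * g for g in grad_output]


grl = GradientReversal(lam=0.5)
print(grl.forward([1.0, 2.0]))   # unchanged in the forward pass
print(grl.backward([1.0, 2.0]))  # negated and scaled in the backward pass
```

In a full system this layer sits between the shared encoder and an auxiliary noise classifier, while the TTS loss back-propagates normally through the same encoder.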


 DOI: 10.21437/Interspeech.2020-2530

Cite as: Cong, J., Yang, S., Xie, L., Yu, G., Wan, G. (2020) Data Efficient Voice Cloning from Noisy Samples with Domain Adversarial Training. Proc. Interspeech 2020, 811-815, DOI: 10.21437/Interspeech.2020-2530.


@inproceedings{Cong2020,
  author={Jian Cong and Shan Yang and Lei Xie and Guoqiao Yu and Guanglu Wan},
  title={{Data Efficient Voice Cloning from Noisy Samples with Domain Adversarial Training}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={811--815},
  doi={10.21437/Interspeech.2020-2530},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2530}
}