From Speaker Verification to Multispeaker Speech Synthesis, Deep Transfer with Feedback Constraint

Zexin Cai, Chuxiong Zhang, Ming Li


In recent years, end-to-end text-to-speech models have become capable of synthesizing high-fidelity speech. However, accessing and controlling speech attributes such as speaker identity, prosody, and emotion in a text-to-speech system remains a challenge. This paper presents a system involving feedback constraints for multispeaker speech synthesis. We enhance knowledge transfer from speaker verification to speech synthesis by engaging the speaker verification network during training. The constraint takes the form of an added speaker-identity loss, which is designed to improve the speaker similarity between the synthesized speech and its natural reference audio. The model is trained and evaluated on publicly available datasets. Experimental results, including visualizations of the speaker embedding space, show significant improvement in speaker identity cloning at the spectrogram level. In addition, synthesized samples are available online for listening.1
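The feedback constraint described above can be illustrated with a minimal sketch. The abstract does not specify the exact loss formulation, so the cosine-similarity-based speaker-consistency loss below (function name `speaker_feedback_loss`, NumPy embeddings standing in for speaker verification network outputs) is an assumption for illustration only:

```python
import numpy as np

def speaker_feedback_loss(synth_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Illustrative speaker-consistency loss: 1 minus the cosine
    similarity between the verification embedding of the synthesized
    utterance and that of the natural reference audio. A hypothetical
    stand-in for the paper's speaker-identity loss; zero when the two
    embeddings point in the same direction."""
    a = synth_emb / np.linalg.norm(synth_emb)
    b = ref_emb / np.linalg.norm(ref_emb)
    return 1.0 - float(np.dot(a, b))

# Identical embeddings incur no penalty; orthogonal ones incur a loss of 1.
ref = np.array([0.6, 0.8])
print(speaker_feedback_loss(ref, ref))                    # 0.0 (up to rounding)
print(speaker_feedback_loss(np.array([0.8, -0.6]), ref))  # 1.0 (up to rounding)
```

In a training loop, this term would be added to the usual spectrogram reconstruction loss, pulling the synthesized utterance's embedding toward the reference speaker's region of the embedding space.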


DOI: 10.21437/Interspeech.2020-1032

Cite as: Cai, Z., Zhang, C., Li, M. (2020) From Speaker Verification to Multispeaker Speech Synthesis, Deep Transfer with Feedback Constraint. Proc. Interspeech 2020, 3974-3978, DOI: 10.21437/Interspeech.2020-1032.


@inproceedings{Cai2020,
  author={Zexin Cai and Chuxiong Zhang and Ming Li},
  title={{From Speaker Verification to Multispeaker Speech Synthesis, Deep Transfer with Feedback Constraint}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3974--3978},
  doi={10.21437/Interspeech.2020-1032},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1032}
}