CopyCat: Many-to-Many Fine-Grained Prosody Transfer for Neural Text-to-Speech

Sri Karlapati, Alexis Moinet, Arnaud Joly, Viacheslav Klimkov, Daniel Sáez-Trigueros, Thomas Drugman

Prosody Transfer (PT) is a technique that uses the prosody of a source audio as a reference while synthesising speech. Fine-grained PT aims to capture prosodic aspects such as rhythm, emphasis, melody, duration, and loudness from a source audio at a very granular level and to transfer them when synthesising speech in a different target speaker’s voice. Current approaches to fine-grained PT suffer from source speaker leakage, where the synthesised speech takes on the voice identity of the source speaker rather than the target speaker. To mitigate this issue, they compromise on the quality of PT. In this paper, we propose CopyCat, a novel many-to-many PT system that is robust to source speaker leakage without using parallel data. We achieve this through a novel reference encoder architecture capable of capturing temporal prosodic representations that are robust to source speaker leakage. We compare CopyCat against a state-of-the-art fine-grained PT model through various subjective evaluations, where we show a relative improvement of 47% in the quality of prosody transfer and 14% in preserving the target speaker identity, while maintaining the same naturalness.
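The core idea in the abstract is a reference encoder that keeps prosody *temporal* (one embedding per frame) while limiting how much speaker identity those embeddings can carry. A minimal NumPy sketch of that idea is shown below; it is illustrative only and not the paper's actual architecture. All layer shapes, weights, and the choice of a narrow per-frame bottleneck are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def reference_encoder(mel, w1, w2):
    """Toy temporal reference encoder (illustrative sketch, not the
    CopyCat architecture). Each mel frame is projected through a
    hidden layer into a small bottleneck, so the per-frame prosody
    embedding has little capacity to carry full speaker identity."""
    h = np.tanh(mel @ w1)   # (T, hidden): per-frame hidden features
    z = h @ w2              # (T, bottleneck): temporal prosody embeddings
    return z

# Hypothetical dimensions: 50 frames of 80-band mel features.
T, n_mels, hidden, bottleneck = 50, 80, 32, 4
mel = rng.normal(size=(T, n_mels))
w1 = rng.normal(size=(n_mels, hidden)) * 0.1
w2 = rng.normal(size=(hidden, bottleneck)) * 0.1

z = reference_encoder(mel, w1, w2)
# One low-dimensional embedding per frame: prosody stays fine-grained
# in time, while the narrow bottleneck discourages the encoder from
# memorising the source speaker's identity.
print(z.shape)  # (50, 4)
```

In a full system, a decoder would condition on these frame-level embeddings together with a separate target-speaker embedding, so that prosody comes from the reference audio and voice identity from the target speaker.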

DOI: 10.21437/Interspeech.2020-1251

Cite as: Karlapati, S., Moinet, A., Joly, A., Klimkov, V., Sáez-Trigueros, D., Drugman, T. (2020) CopyCat: Many-to-Many Fine-Grained Prosody Transfer for Neural Text-to-Speech. Proc. Interspeech 2020, 4387-4391, DOI: 10.21437/Interspeech.2020-1251.

@inproceedings{karlapati20_interspeech,
  author={Sri Karlapati and Alexis Moinet and Arnaud Joly and Viacheslav Klimkov and Daniel Sáez-Trigueros and Thomas Drugman},
  title={{CopyCat: Many-to-Many Fine-Grained Prosody Transfer for Neural Text-to-Speech}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={4387--4391},
  doi={10.21437/Interspeech.2020-1251}
}