Multi-Reference Neural TTS Stylization with Adversarial Cycle Consistency

Matt Whitehill, Shuang Ma, Daniel McDuff, Yale Song

Current multi-reference style transfer models for Text-to-Speech (TTS) perform sub-optimally on disjoint datasets, where one dataset contains only a single style class for one of the style dimensions. These models generally fail to produce style transfer for the dimension that is underrepresented in the dataset. In this paper, we propose an adversarial cycle consistency training scheme with paired and unpaired triplets to ensure the use of information from all style dimensions. During training, we incorporate unpaired triplets with randomly selected reference audio samples and encourage the synthesized speech to preserve the appropriate styles using adversarial cycle consistency. We use this method to transfer emotion from a dataset containing four emotions to a dataset with only a single emotion. This results in a 78% improvement in style transfer (based on emotion classification) with minimal reduction in fidelity and naturalness. In subjective evaluations, our method was consistently rated as closer to the reference style than the baseline. Synthesized speech samples are available at:
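The paired/unpaired triplet scheme in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names (`synthesize`, `encode_style`), the loss forms, and the mixing probability are hypothetical placeholders standing in for the model's actual components.

```python
import random

def reconstruction_loss(synth, target):
    # Paired triplet: synthesized speech should match the ground-truth audio.
    return sum((s - t) ** 2 for s, t in zip(synth, target)) / len(target)

def cycle_consistency_loss(style_of_synth, style_of_ref):
    # Unpaired triplet: re-encode the synthesized speech and require its style
    # embedding to stay close to the randomly chosen reference's embedding.
    return sum((a - b) ** 2 for a, b in zip(style_of_synth, style_of_ref)) / len(style_of_ref)

def training_step(text, paired_audio, style_refs, synthesize, encode_style,
                  unpaired_prob=0.5):
    """One hypothetical training step mixing paired and unpaired triplets.

    `synthesize` and `encode_style` stand in for the TTS model and the
    style encoder; `unpaired_prob` is an assumed mixing ratio.
    """
    if random.random() < unpaired_prob:
        ref = random.choice(style_refs)      # unpaired: random reference audio
        synth = synthesize(text, ref)
        return cycle_consistency_loss(encode_style(synth), encode_style(ref))
    synth = synthesize(text, paired_audio)   # paired: matching ground truth
    return reconstruction_loss(synth, paired_audio)
```

In the paper's setting, the unpaired branch is what forces the model to use the underrepresented style dimension, since the randomly drawn reference need not match the text's original style; an adversarial discriminator (omitted here) additionally scores whether the cycle is preserved.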

 DOI: 10.21437/Interspeech.2020-2985

Cite as: Whitehill, M., Ma, S., McDuff, D., Song, Y. (2020) Multi-Reference Neural TTS Stylization with Adversarial Cycle Consistency. Proc. Interspeech 2020, 4442-4446, DOI: 10.21437/Interspeech.2020-2985.

@inproceedings{whitehill20_interspeech,
  author={Matt Whitehill and Shuang Ma and Daniel McDuff and Yale Song},
  title={{Multi-Reference Neural TTS Stylization with Adversarial Cycle Consistency}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={4442--4446},
  doi={10.21437/Interspeech.2020-2985}
}