Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion Without Parallel Data

Seung-won Park, Doo-young Kim, Myun-chul Joe


We propose Cotatron, a transcription-guided speech encoder for speaker-independent linguistic representation. Cotatron is based on a multispeaker TTS architecture and can be trained with conventional TTS datasets. We train a voice conversion system to reconstruct speech from Cotatron features, an approach similar to previous methods based on Phonetic Posteriorgrams (PPGs). By training and evaluating our system with 108 speakers from the VCTK dataset, we outperform the previous method in terms of both naturalness and speaker similarity. Our system can also convert speech from speakers that are unseen during training, and can utilize ASR to automate the transcription with minimal performance degradation. Audio samples are available at https://mindslab-ai.github.io/cotatron, and the code with a pre-trained model will be made available soon.
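To make the data flow concrete, the following is a minimal NumPy sketch of the idea the abstract describes: a TTS-style encoder's text embeddings are upsampled to frame rate via an attention alignment, yielding speaker-independent linguistic features; a decoder then reconstructs mel frames from those features plus a target-speaker embedding. All dimensions, function names, and the linear decoder are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
T_text, T_mel = 12, 40        # text tokens, mel-spectrogram frames
d_text, d_mel, d_spk = 8, 80, 16

def cotatron_features(text_emb, alignment):
    """Speaker-independent linguistic features: encoder text embeddings
    upsampled to frame rate via the TTS attention alignment (sketch)."""
    return alignment @ text_emb            # (T_mel, d_text)

def decoder(ling_feats, spk_emb, W, V):
    """Toy linear decoder: reconstruct mel frames from linguistic
    features plus a target-speaker embedding broadcast over time."""
    spk = np.broadcast_to(spk_emb, (ling_feats.shape[0], d_spk))
    return ling_feats @ W + spk @ V        # (T_mel, d_mel)

text_emb = rng.normal(size=(T_text, d_text))

# A hard monotonic alignment as a stand-in for soft TTS attention
alignment = np.zeros((T_mel, T_text))
for t in range(T_mel):
    alignment[t, min(t * T_text // T_mel, T_text - 1)] = 1.0

W = rng.normal(size=(d_text, d_mel))
V = rng.normal(size=(d_spk, d_mel))

feats = cotatron_features(text_emb, alignment)  # source linguistic content
target_spk = rng.normal(size=(d_spk,))          # any target speaker seen in training
converted = decoder(feats, target_spk, W, V)
print(converted.shape)                          # frame-rate mel output
```

Because the linguistic features carry no speaker identity, swapping `target_spk` for a different training-set speaker's embedding converts the same utterance to any of the many target voices, which is what makes the setup "any-to-many".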


DOI: 10.21437/Interspeech.2020-1542

Cite as: Park, S., Kim, D., Joe, M. (2020) Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion Without Parallel Data. Proc. Interspeech 2020, 4696-4700, DOI: 10.21437/Interspeech.2020-1542.


@inproceedings{Park2020,
  author={Seung-won Park and Doo-young Kim and Myun-chul Joe},
  title={{Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion Without Parallel Data}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={4696--4700},
  doi={10.21437/Interspeech.2020-1542},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1542}
}