DurIAN-SC: Duration Informed Attention Network Based Singing Voice Conversion System

Liqiang Zhang, Chengzhu Yu, Heng Lu, Chao Weng, Chunlei Zhang, Yusong Wu, Xiang Xie, Zijin Li, Dong Yu


Singing voice conversion converts the timbre of a source singing voice to that of a target speaker while keeping the singing content the same. However, singing data for a target speaker is much more difficult to collect than normal speech data. In this paper, we introduce a singing voice conversion algorithm that is capable of generating high-quality singing in a target speaker's voice using only his/her normal speech data. First, we integrate the training and conversion processes of speech and singing into one framework by unifying the features used in standard speech synthesis and singing synthesis systems. In this way, normal speech data can also contribute to singing voice conversion training, making the singing voice conversion system more robust, especially when the singing database is small. Moreover, to achieve one-shot singing voice conversion, a speaker embedding module is developed using both speech and singing data, which provides target speaker identity information during conversion. Experiments indicate that the proposed singing voice conversion system can convert source singing to high-quality singing in the target speaker's voice with only 20 seconds of the target speaker's enrollment speech data.


 DOI: 10.21437/Interspeech.2020-1789

Cite as: Zhang, L., Yu, C., Lu, H., Weng, C., Zhang, C., Wu, Y., Xie, X., Li, Z., Yu, D. (2020) DurIAN-SC: Duration Informed Attention Network Based Singing Voice Conversion System. Proc. Interspeech 2020, 1231-1235, DOI: 10.21437/Interspeech.2020-1789.


@inproceedings{Zhang2020,
  author={Liqiang Zhang and Chengzhu Yu and Heng Lu and Chao Weng and Chunlei Zhang and Yusong Wu and Xiang Xie and Zijin Li and Dong Yu},
  title={{DurIAN-SC: Duration Informed Attention Network Based Singing Voice Conversion System}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1231--1235},
  doi={10.21437/Interspeech.2020-1789},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1789}
}