Adversarially Trained Multi-Singer Sequence-to-Sequence Singing Synthesizer

Jie Wu, Jian Luan


This paper presents a high-quality singing synthesizer that can model a voice from limited available recordings. Building on a sequence-to-sequence singing model, we design a multi-singer framework that leverages all the existing singing data across different singers. To attenuate the imbalance of musical-score coverage among singers, we incorporate an adversarial singer-classification task that makes the encoder output less singer-dependent. Furthermore, we apply multiple random window discriminators (MRWDs) to the generated acoustic features, turning the network into a GAN. Both objective and subjective evaluations indicate that the proposed synthesizer generates higher-quality singing voice than the baseline (4.12 vs. 3.53 in MOS). In particular, the articulation of high-pitched vowels is significantly enhanced.
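As a loose illustration of the multiple-random-window idea, the sketch below crops several random fixed-length windows from a sequence of acoustic feature frames; in an MRWD setup, each discriminator scores windows of one particular size. This is a hypothetical helper, not the paper's implementation: the function name, window sizes, and counts are illustrative assumptions.

```python
import random

def random_windows(features, window_sizes, num_per_size=2, rng=None):
    """Crop random contiguous windows from a frame sequence.

    features: list of per-frame feature vectors.
    window_sizes: one entry per discriminator in the ensemble
                  (illustrative values; the paper's sizes may differ).
    num_per_size: how many windows to sample per size.
    """
    rng = rng or random.Random()
    windows = []
    for size in window_sizes:
        for _ in range(num_per_size):
            # Pick a valid start index so the window fits in the sequence.
            start = rng.randrange(len(features) - size + 1)
            windows.append(features[start:start + size])
    return windows

# Example: 100 frames, two discriminator window sizes.
frames = [[float(t)] for t in range(100)]
crops = random_windows(frames, window_sizes=[20, 40], rng=random.Random(0))
```

Sampling windows at multiple scales lets short windows judge local frame quality while longer ones judge prosodic continuity, which is the usual motivation for random-window discriminator ensembles.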


 DOI: 10.21437/Interspeech.2020-1109

Cite as: Wu, J., Luan, J. (2020) Adversarially Trained Multi-Singer Sequence-to-Sequence Singing Synthesizer. Proc. Interspeech 2020, 1296-1300, DOI: 10.21437/Interspeech.2020-1109.


@inproceedings{Wu2020,
  author={Jie Wu and Jian Luan},
  title={{Adversarially Trained Multi-Singer Sequence-to-Sequence Singing Synthesizer}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1296--1300},
  doi={10.21437/Interspeech.2020-1109},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1109}
}