Singing Synthesis: With a Little Help from my Attention

Orazio Angelini, Alexis Moinet, Kayoko Yanagisawa, Thomas Drugman

We present UTACO, a singing synthesis model based on an attention-based sequence-to-sequence mechanism and a vocoder based on dilated causal convolutions. These two classes of models have significantly affected the field of text-to-speech, but have never been thoroughly applied to the task of singing synthesis. UTACO demonstrates that attention can be successfully applied to singing synthesis, and it requires considerably less explicit modelling of voice features, such as F0 patterns, vibratos, and note and phoneme durations, than previous models in the literature. Despite this, it shows a strong improvement in naturalness over previous neural singing synthesis models. The model does not require any durations or pitch patterns as inputs, and learns to insert vibrato autonomously according to the musical context. However, we observe that, by completely dispensing with explicit duration modelling, it becomes harder to obtain the fine control of timing needed to exactly match the tempo of a song.
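The paper does not include code, but the dilated causal convolutions mentioned for the vocoder can be illustrated with a minimal, hypothetical sketch. The function below is not the authors' implementation; it only shows the defining property of such a layer: the output at time t depends on inputs at t, t-d, t-2d, ... (for dilation d), never on future samples, and stacking layers with growing dilations enlarges the receptive field exponentially.

```python
def dilated_causal_conv1d(x, weights, dilation):
    """Apply a 1-D causal convolution with the given dilation.

    x        : list of floats (input signal)
    weights  : list of floats (filter taps; weights[0] multiplies x[t])
    dilation : spacing between taps, so tap k reads x[t - k * dilation]

    Positions before the start of the sequence are treated as zeros
    (left zero-padding), which keeps the operation strictly causal.
    """
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:  # taps that would read the "past before time 0" contribute nothing
                acc += w * x[idx]
        out.append(acc)
    return out


# A unit impulse reveals which past samples each output position sees:
# with dilation 2 and two taps, output[t] = x[t] + x[t-2].
impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
response = dilated_causal_conv1d(impulse, [1.0, 1.0], dilation=2)
print(response)  # the impulse reappears 2 steps later: [1.0, 0.0, 1.0, 0.0, 0.0]
```

Stacking such layers with dilations 1, 2, 4, 8, ... (as in WaveNet-style vocoders) lets a deep network condition each output sample on a long window of past samples while every layer remains cheap and strictly causal.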

DOI: 10.21437/Interspeech.2020-1399

Cite as: Angelini, O., Moinet, A., Yanagisawa, K., Drugman, T. (2020) Singing Synthesis: With a Little Help from my Attention. Proc. Interspeech 2020, 1221-1225, DOI: 10.21437/Interspeech.2020-1399.

@inproceedings{angelini20_interspeech,
  author={Orazio Angelini and Alexis Moinet and Kayoko Yanagisawa and Thomas Drugman},
  title={{Singing Synthesis: With a Little Help from my Attention}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={1221--1225},
  doi={10.21437/Interspeech.2020-1399}
}