Attention Forcing for Speech Synthesis

Qingyun Dou, Joshua Efiong, Mark J.F. Gales


Auto-regressive sequence-to-sequence models with attention mechanisms have achieved state-of-the-art performance in various tasks including speech synthesis. Training these models can be difficult. The standard approach guides a model with the reference output history during training. However, during synthesis, the generated output history must be used. This mismatch can impact performance. Several approaches have been proposed to handle this, normally by selectively using the generated output history. To make training stable, these approaches often require a heuristic schedule or an auxiliary classifier. This paper introduces attention forcing, which guides the model with the generated output history and reference attention. This approach reduces the training-evaluation mismatch without the need for a schedule or a classifier. Additionally, for standard training approaches, the frame rate is often reduced to prevent models from copying the output history. As attention forcing does not feed the reference output history to the model, it allows using a higher frame rate, which improves the speech quality. Finally, attention forcing allows the model to generate output sequences aligned with the references, which is important for some downstream tasks such as training neural vocoders. Experiments show that attention forcing allows doubling the frame rate, and yields a significant gain in speech quality.
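The core idea of the abstract — feed the model its own generated output history while guiding it with a reference attention alignment — can be illustrated with a minimal NumPy sketch of one decoding loop. The toy model, dimensions, and hard-coded reference alignment below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen only for illustration)
T_enc, T_dec, d = 4, 3, 5
encoder_states = rng.normal(size=(T_enc, d))   # encoder outputs, one row per input step
W = rng.normal(size=(2 * d, d)) * 0.1          # stand-in decoder "parameters"

# Reference attention: an alignment obtained elsewhere (e.g. from a
# teacher-forced pass of a well-trained model); hard-coded here.
reference_attention = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
])

def decode_step(prev_output, attention_weights):
    """One decoder step: context from attention, output from [context; prev]."""
    context = attention_weights @ encoder_states   # (d,) weighted sum of encoder states
    return np.tanh(np.concatenate([context, prev_output]) @ W)

# Attention forcing: at each step, feed back the *generated* output
# (not the reference), but use the *reference* attention weights.
prev = np.zeros(d)                                 # initial "go" frame
generated = []
for t in range(T_dec):
    out = decode_step(prev, reference_attention[t])
    generated.append(out)
    prev = out                                     # generated history, not reference

generated = np.stack(generated)                    # (T_dec, d)
```

Contrast with teacher forcing, where `prev` would be set to the reference output at each step: here the model sees its own predictions (matching inference conditions), while the reference attention keeps the output aligned with the input sequence.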


 DOI: 10.21437/Interspeech.2020-2520

Cite as: Dou, Q., Efiong, J., Gales, M.J. (2020) Attention Forcing for Speech Synthesis. Proc. Interspeech 2020, 4014-4018, DOI: 10.21437/Interspeech.2020-2520.


@inproceedings{Dou2020,
  author={Qingyun Dou and Joshua Efiong and Mark J.F. Gales},
  title={{Attention Forcing for Speech Synthesis}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4014--4018},
  doi={10.21437/Interspeech.2020-2520},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2520}
}