Incremental Text to Speech for Neural Sequence-to-Sequence Models Using Reinforcement Learning

Devang S. Ram Mohan, Raphael Lenain, Lorenzo Foglianti, Tian Huey Teh, Marlene Staib, Alexandra Torresquintero, Jiameng Gao


Modern approaches to text-to-speech require the entire input character sequence to be processed before any audio is synthesised. This latency limits the suitability of such models for time-sensitive tasks like simultaneous interpretation. Interleaving the action of reading a character with that of synthesising audio reduces this latency. However, the order of this sequence of interleaved actions varies across sentences, which raises the question of how the actions should be chosen. We propose a reinforcement-learning-based framework to train an agent to make this decision. We compare our performance against that of deterministic, rule-based systems. Our results demonstrate that our agent successfully balances the trade-off between the latency of audio generation and the quality of synthesised audio. More broadly, we show that neural sequence-to-sequence models can be adapted to run in an incremental manner.
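The interleaved decision process described in the abstract can be sketched in outline. The snippet below is an illustrative assumption, not the paper's implementation: the action names, the policy interface, and the wait-k-style deterministic rule are hypothetical stand-ins for the learned agent and the rule-based baselines it is compared against.

```python
# Hypothetical sketch of incremental TTS as a sequence of interleaved
# READ/SPEAK actions. The policy decides, at each step, whether to read
# one more input character or to emit one more chunk of audio.
READ, SPEAK = "READ", "SPEAK"

def wait_k_policy(k):
    """A deterministic, rule-based baseline (hypothetical): read k
    characters ahead of the audio, then alternate SPEAK and READ."""
    def policy(n_read, n_spoken, input_exhausted):
        if input_exhausted or n_read - n_spoken >= k:
            return SPEAK
        return READ
    return policy

def run_incremental_tts(text, policy, synthesize_step=None):
    """Interleave reading characters with synthesising audio chunks.

    `synthesize_step` stands in for conditioning the neural
    sequence-to-sequence model on the prefix read so far; it is left
    as a stub here.
    """
    n_read, n_spoken = 0, 0
    actions = []
    while n_spoken < len(text):
        exhausted = n_read >= len(text)
        action = policy(n_read, n_spoken, exhausted)
        if action == READ and not exhausted:
            n_read += 1
        else:
            if synthesize_step is not None:
                synthesize_step(text[:n_read], n_spoken)
            n_spoken += 1
        actions.append(action)
    return actions

actions = run_incremental_tts("hello", wait_k_policy(2))
```

In this framing, the RL agent replaces the fixed rule inside `policy`, learning when to read versus speak so as to trade audio quality against latency; the sequence of actions it emits differs from sentence to sentence.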


 DOI: 10.21437/Interspeech.2020-1822

Cite as: Mohan, D.S.R., Lenain, R., Foglianti, L., Teh, T.H., Staib, M., Torresquintero, A., Gao, J. (2020) Incremental Text to Speech for Neural Sequence-to-Sequence Models Using Reinforcement Learning. Proc. Interspeech 2020, 3186-3190, DOI: 10.21437/Interspeech.2020-1822.


@inproceedings{Mohan2020,
  author={Devang S. Ram Mohan and Raphael Lenain and Lorenzo Foglianti and Tian Huey Teh and Marlene Staib and Alexandra Torresquintero and Jiameng Gao},
  title={{Incremental Text to Speech for Neural Sequence-to-Sequence Models Using Reinforcement Learning}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3186--3190},
  doi={10.21437/Interspeech.2020-1822},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1822}
}