Vocoder-Based Speech Synthesis from Silent Videos

Daniel Michelsanti, Olga Slizovskaia, Gloria Haro, Emilia Gómez, Zheng-Hua Tan, Jesper Jensen


Both acoustic and visual information influence human speech perception. Consequently, the absence of audio in a video sequence makes speech largely unintelligible for untrained lip readers. In this paper, we present a deep-learning approach to synthesise speech from the silent video of a talker. The system learns a mapping from raw video frames to acoustic features and reconstructs the speech with a vocoder synthesis algorithm. To improve speech reconstruction performance, our model is also trained to predict text information in a multi-task learning fashion, and it is able to reconstruct and recognise speech simultaneously in real time. The results in terms of estimated speech quality and intelligibility show the effectiveness of our method, which improves over existing video-to-speech approaches.
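The multi-task setup described above combines the primary speech-reconstruction objective with an auxiliary text-prediction objective. As a minimal sketch (the paper's actual losses, weighting, and names are not specified here; `multitask_loss`, `alpha`, and the example values are illustrative assumptions, not the authors' implementation):

```python
def multitask_loss(recon_loss: float, text_loss: float, alpha: float = 0.5) -> float:
    """Combine a speech-reconstruction loss (e.g. a regression loss on
    vocoder acoustic features) with an auxiliary text-prediction loss
    (e.g. a sequence loss such as CTC), weighted by alpha.

    Hypothetical weighting scheme for illustration only.
    """
    return (1.0 - alpha) * recon_loss + alpha * text_loss


# Example: equal weighting of the two task losses.
loss = multitask_loss(recon_loss=0.8, text_loss=0.4, alpha=0.5)
```

In such schemes, the auxiliary text task acts as a regulariser: it encourages the shared video encoder to capture linguistic content, which in turn can improve the acoustic features predicted for the vocoder.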


DOI: 10.21437/Interspeech.2020-1026

Cite as: Michelsanti, D., Slizovskaia, O., Haro, G., Gómez, E., Tan, Z.-H., Jensen, J. (2020) Vocoder-Based Speech Synthesis from Silent Videos. Proc. Interspeech 2020, 3530-3534, DOI: 10.21437/Interspeech.2020-1026.


@inproceedings{Michelsanti2020,
  author={Daniel Michelsanti and Olga Slizovskaia and Gloria Haro and Emilia Gómez and Zheng-Hua Tan and Jesper Jensen},
  title={{Vocoder-Based Speech Synthesis from Silent Videos}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3530--3534},
  doi={10.21437/Interspeech.2020-1026},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1026}
}