Large-Scale Visual Speech Recognition

Brendan Shillingford, Yannis Assael, Matthew W. Hoffman, Thomas Paine, Cían Hughes, Utsav Prabhu, Hank Liao, Hasim Sak, Kanishka Rao, Lorrayne Bennett, Marie Mulville, Misha Denil, Ben Coppin, Ben Laurie, Andrew Senior, Nando de Freitas

This work presents a scalable solution to continuous visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of transcriptions and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a phoneme-to-word speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset, despite having access to additional contextual information. Our approach significantly improves on previous lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which achieve only 89.8% and 76.8% WER respectively.
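The abstract describes a three-stage pipeline in which the neural network emits per-frame phoneme distributions that a downstream decoder turns into words. The paper uses a full phoneme-to-word speech decoder; as a minimal sketch of just the interface between the network and the decoder, the toy function below performs a greedy CTC-style collapse of per-frame distributions into a phoneme sequence (the function name and phoneme inventory are illustrative, not from the paper):

```python
# Toy sketch: collapsing per-frame phoneme distributions into a phoneme
# sequence. The actual system uses a full phoneme-to-word decoder; this
# greedy CTC-style collapse is only a stand-in for illustration.
import numpy as np

PHONEMES = ["<blank>", "HH", "EH", "L", "OW"]  # illustrative inventory

def greedy_ctc_collapse(frame_probs: np.ndarray) -> list:
    """Collapse a (T x V) matrix of per-frame phoneme distributions into a
    phoneme sequence: argmax per frame, merge repeats, drop blanks."""
    best = frame_probs.argmax(axis=1)
    out, prev = [], None
    for idx in best:
        if idx != prev and idx != 0:  # skip merged repeats and blanks
            out.append(PHONEMES[idx])
        prev = idx
    return out

# Toy per-frame distributions over 6 frames: HH EH EH <blank> L OW.
T, V = 6, len(PHONEMES)
probs = np.full((T, V), 0.01)
for t, idx in enumerate([1, 2, 2, 0, 3, 4]):
    probs[t, idx] = 0.96

print(greedy_ctc_collapse(probs))  # ['HH', 'EH', 'L', 'OW']
```

In the real system, a language-model-backed decoder searches over these distributions rather than taking a per-frame argmax, which is part of why it can reach a far lower WER than greedy decoding would.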

DOI: 10.21437/Interspeech.2019-1669

Cite as: Shillingford, B., Assael, Y., Hoffman, M.W., Paine, T., Hughes, C., Prabhu, U., Liao, H., Sak, H., Rao, K., Bennett, L., Mulville, M., Denil, M., Coppin, B., Laurie, B., Senior, A., de Freitas, N. (2019) Large-Scale Visual Speech Recognition. Proc. Interspeech 2019, 4135-4139, DOI: 10.21437/Interspeech.2019-1669.

@inproceedings{shillingford19_interspeech,
  author={Brendan Shillingford and Yannis Assael and Matthew W. Hoffman and Thomas Paine and Cían Hughes and Utsav Prabhu and Hank Liao and Hasim Sak and Kanishka Rao and Lorrayne Bennett and Marie Mulville and Misha Denil and Ben Coppin and Ben Laurie and Andrew Senior and Nando de Freitas},
  title={{Large-Scale Visual Speech Recognition}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={4135--4139},
  doi={10.21437/Interspeech.2019-1669}
}