Combining Multiple Views for Visual Speech Recognition

Marina Zimmermann, Mostafa Mehdipour Ghazi, Hazim Kemal Ekenel, Jean-Philippe Thiran


Visual speech recognition is a challenging research problem with the particular practical application of aiding audio speech recognition in noisy scenarios. Multiple-camera setups can benefit visual speech recognition systems in terms of improved performance and robustness. In this paper, we explore this aspect and provide a comprehensive study on combining multiple views for visual speech recognition. The analysis covers the fusion of all possible view-angle combinations at both the feature level and the decision level. The visual speech recognition system employed in this study extracts features through a PCA-based convolutional neural network, followed by an LSTM network. Finally, these features are processed in a tandem system, being fed into a GMM-HMM scheme. Decision fusion acts after this point by combining the Viterbi path log-likelihoods. The results show that the complementary information contained in recordings from different view angles improves the results significantly. For example, sentence correctness on the test set increases from 76% for the best-performing single view (30°) to up to 83% when this view is combined with the frontal and 60° view angles.
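The decision-level fusion described above can be sketched as a weighted combination of per-view Viterbi path log-likelihoods, with the fused hypothesis chosen by the highest combined score. The following is a minimal illustrative sketch, not the paper's implementation; the function name, the dictionary-based score layout, and the example hypothesis strings are all assumptions made for illustration.

```python
def fuse_viterbi_loglikes(view_loglikes, weights=None):
    """Decision-level fusion sketch: combine per-view Viterbi path
    log-likelihoods for each sentence hypothesis, then pick the best.

    view_loglikes: dict mapping view name -> dict mapping
        sentence hypothesis -> log-likelihood from that view's decoder.
    weights: optional dict of per-view weights (defaults to equal weights).
    Returns (best_hypothesis, fused_scores).
    """
    views = list(view_loglikes)
    if weights is None:
        weights = {v: 1.0 / len(views) for v in views}
    # Only hypotheses scored by every view can be fused.
    hypotheses = set.intersection(*(set(view_loglikes[v]) for v in views))
    fused = {
        h: sum(weights[v] * view_loglikes[v][h] for v in views)
        for h in hypotheses
    }
    best = max(fused, key=fused.get)
    return best, fused

# Hypothetical per-view scores for two competing sentence hypotheses:
scores = {
    "frontal": {"hypothesis A": -120.0, "hypothesis B": -118.0},
    "30deg":   {"hypothesis A": -110.0, "hypothesis B": -125.0},
    "60deg":   {"hypothesis A": -115.0, "hypothesis B": -119.0},
}
best, fused = fuse_viterbi_loglikes(scores)
# Although the frontal view alone slightly prefers hypothesis B,
# the fused score across all three views selects hypothesis A.
```

This illustrates how complementary views can overturn a single view's decision, which is the effect the abstract reports when combining the 30° view with the frontal and 60° views.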


 DOI: 10.21437/AVSP.2017-10

Cite as: Zimmermann, M., Ghazi, M.M., Ekenel, H.K., Thiran, J. (2017) Combining Multiple Views for Visual Speech Recognition. Proc. The 14th International Conference on Auditory-Visual Speech Processing, 47-52, DOI: 10.21437/AVSP.2017-10.


@inproceedings{Zimmermann2017,
  author={Marina Zimmermann and Mostafa Mehdipour Ghazi and Hazim Kemal Ekenel and Jean-Philippe Thiran},
  title={Combining Multiple Views for Visual Speech Recognition},
  year={2017},
  booktitle={Proc. The 14th International Conference on Auditory-Visual Speech Processing},
  pages={47--52},
  doi={10.21437/AVSP.2017-10},
  url={http://dx.doi.org/10.21437/AVSP.2017-10}
}