End-to-End Audiovisual Fusion with LSTMs

Stavros Petridis, Yujiang Wang, Zuwei Li, Maja Pantic

Several end-to-end deep learning approaches have been recently presented which simultaneously extract visual features from the input images and perform visual speech classification. However, research on jointly extracting audio and visual features and performing classification is very limited. In this work, we present an end-to-end audiovisual model based on Bidirectional Long Short-Term Memory (BLSTM) networks. To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the pixels and spectrograms and perform classification of speech and non-linguistic vocalisations. The model consists of multiple identical streams, one for each modality, which extract features directly from mouth regions and spectrograms. The temporal dynamics in each stream/modality are modeled by a BLSTM and the fusion of multiple streams/modalities takes place via another BLSTM. An absolute improvement of 1.9% in the mean F1 of 4 non-linguistic vocalisations over audio-only classification is reported on the AVIC database. At the same time, the proposed end-to-end audiovisual fusion system improves the state-of-the-art performance on the AVIC database, leading to a 9.7% absolute increase in the mean F1 measure. We also perform audiovisual speech recognition experiments on the OuluVS2 database using different views of the mouth, frontal to profile. The proposed audiovisual system significantly outperforms the audio-only model for all views when the acoustic noise is high.
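The fusion scheme described above — one BLSTM per modality stream, with a second BLSTM fusing the concatenated stream outputs — can be sketched as follows. This is a minimal, illustrative NumPy sketch of the topology only, not the authors' implementation: the feature dimensions, hidden size, and random inputs are assumptions, and the convolutional/fully-connected feature extractors that precede the BLSTMs in the paper are replaced here by raw feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (illustrative, not the paper's exact implementation)."""
    def __init__(self, in_dim, hid_dim):
        self.hid = hid_dim
        # One stacked weight matrix for the four gates: input, forget, cell, output.
        self.W = rng.standard_normal((4 * hid_dim, in_dim + hid_dim)) * 0.1
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        return h, c

def blstm(cell_fw, cell_bw, seq):
    """Run the sequence forward and backward; concatenate hidden states per step."""
    h, c, fwd = np.zeros(cell_fw.hid), np.zeros(cell_fw.hid), []
    for x in seq:
        h, c = cell_fw.step(x, h, c)
        fwd.append(h)
    h, c, bwd = np.zeros(cell_bw.hid), np.zeros(cell_bw.hid), []
    for x in reversed(seq):
        h, c = cell_bw.step(x, h, c)
        bwd.append(h)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Hypothetical dimensions: 25 time steps, 32-d visual and 40-d audio features.
T, VIS_DIM, AUD_DIM, HID = 25, 32, 40, 16
visual_seq = [rng.standard_normal(VIS_DIM) for _ in range(T)]
audio_seq = [rng.standard_normal(AUD_DIM) for _ in range(T)]

# One BLSTM per modality stream (mouth-region features, spectrogram features) ...
vis_stream = blstm(LSTMCell(VIS_DIM, HID), LSTMCell(VIS_DIM, HID), visual_seq)
aud_stream = blstm(LSTMCell(AUD_DIM, HID), LSTMCell(AUD_DIM, HID), audio_seq)

# ... fused by concatenating the stream outputs per step into a second BLSTM.
fused_in = [np.concatenate([v, a]) for v, a in zip(vis_stream, aud_stream)]
fused = blstm(LSTMCell(4 * HID, HID), LSTMCell(4 * HID, HID), fused_in)

print(len(fused), fused[0].shape)  # T time steps, each a 2*HID-dim fused feature
```

In the full model, the fused per-step features would feed a softmax output layer for classifying speech and non-linguistic vocalisations; the sketch stops at the fusion BLSTM since that is the component the abstract describes.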

DOI: 10.21437/AVSP.2017-8

Cite as: Petridis, S., Wang, Y., Li, Z., Pantic, M. (2017) End-to-End Audiovisual Fusion with LSTMs. Proc. The 14th International Conference on Auditory-Visual Speech Processing, 36-40, DOI: 10.21437/AVSP.2017-8.

@inproceedings{petridis2017endtoend,
  author={Stavros Petridis and Yujiang Wang and Zuwei Li and Maja Pantic},
  title={End-to-End Audiovisual Fusion with LSTMs},
  booktitle={Proc. The 14th International Conference on Auditory-Visual Speech Processing},
  pages={36--40},
  year={2017},
  doi={10.21437/AVSP.2017-8}
}