Should we Hard-Code the Recurrence Concept or Learn it Instead? Exploring the Transformer Architecture for Audio-Visual Speech Recognition

George Sterpu, Christian Saam, Naomi Harte


The audio-visual speech fusion strategy AV Align has shown significant performance improvements in audio-visual speech recognition (AVSR) on the challenging LRS2 dataset. When leveraging the visual modality of speech in addition to the auditory one, performance improvements range between 7% and 30% depending on the noise level. This work presents a variant of AV Align where the recurrent Long Short-Term Memory (LSTM) computation block is replaced by the more recently proposed Transformer block. We compare the two methods, discussing their strengths and weaknesses in greater detail. We find that Transformers also learn cross-modal monotonic alignments, but suffer from the same visual convergence problems as the LSTM model, calling for a deeper investigation into the dominant modality problem in machine learning.
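The cross-modal alignment described in the abstract can be illustrated with scaled dot-product attention, where audio frames act as queries over video frames. The following NumPy sketch is purely illustrative: the function name, dimensions, and random inputs are assumptions for demonstration, not the authors' AV Align implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(audio, video):
    """Illustrative sketch: each audio frame attends over all video frames
    via scaled dot-product attention, producing a fused representation
    and an (audio x video) alignment matrix. Not the AV Align code."""
    d = audio.shape[-1]
    scores = audio @ video.T / np.sqrt(d)   # (Ta, Tv) similarity scores
    weights = softmax(scores, axis=-1)      # each row sums to 1
    fused = weights @ video                 # (Ta, d) video summary per audio frame
    return fused, weights

# Toy example with hypothetical frame counts and feature size.
rng = np.random.default_rng(0)
audio = rng.standard_normal((50, 64))  # 50 audio frames, 64-dim features
video = rng.standard_normal((20, 64))  # 20 video frames, 64-dim features
fused, weights = cross_modal_attention(audio, video)
```

In a monotonic alignment, the rows of `weights` would concentrate their mass along a roughly diagonal path through time, which is the pattern the paper reports the Transformer variant also learns.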


DOI: 10.21437/Interspeech.2020-2480

Cite as: Sterpu, G., Saam, C., Harte, N. (2020) Should we Hard-Code the Recurrence Concept or Learn it Instead? Exploring the Transformer Architecture for Audio-Visual Speech Recognition. Proc. Interspeech 2020, 3506-3509, DOI: 10.21437/Interspeech.2020-2480.


@inproceedings{Sterpu2020,
  author={George Sterpu and Christian Saam and Naomi Harte},
  title={{Should we Hard-Code the Recurrence Concept or Learn it Instead? Exploring the Transformer Architecture for Audio-Visual Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3506--3509},
  doi={10.21437/Interspeech.2020-2480},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2480}
}