Resource-Adaptive Deep Learning for Visual Speech Recognition

Alexandros Koumparoulis, Gerasimos Potamianos, Samuel Thomas, Edmilson da Silva Morais


We focus on the problem of efficient architectures for lipreading that allow trading off computational resources for visual speech recognition accuracy. In particular, we make two contributions: First, we introduce MobiLipNetV3, an efficient and accurate lipreading model, based on our earlier work on MobiLipNetV2 and incorporating recent advances in convolutional neural network architectures. Second, we propose a novel recognition paradigm, called MultiRate Ensemble (MRE), that combines a “lean” and a “full” MobiLipNetV3 in the lipreading pipeline, with the latter applied at a lower frame rate. This architecture yields a family of systems offering multiple accuracy vs. efficiency operating points, depending on the frame-rate decimation of the “full” model, thus allowing adaptation to the available device resources. We evaluate our approach on the TCD-TIMIT corpus, popular for speaker-independent lipreading of continuous speech. The proposed MRE family of systems can be up to 73 times more efficient than residual neural network based lipreading, and up to twice as efficient as MobiLipNetV2, while in both cases achieving up to 8% absolute WER reduction, depending on the chosen MRE operating point. For example, a temporal decimation factor of three yields a 7% absolute WER reduction and a 26% relative decrease in computations over MobiLipNetV2.
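The multi-rate idea above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the fusion rule (holding the “full” model's most recent output between its updates and averaging it with the “lean” model's per-frame output), and the `lean_model`/`full_model` callables, are assumptions made for clarity.

```python
import numpy as np

def multirate_ensemble(frames, lean_model, full_model, decimation=3):
    """Hypothetical sketch of a MultiRate Ensemble (MRE).

    The lean model processes every video frame, while the costly
    full model runs only on every `decimation`-th frame; its last
    output is held between updates. The equal-weight averaging of
    the two streams is an illustrative assumption, not the fusion
    used in the paper.
    """
    outputs = []
    full_out = None
    for t, frame in enumerate(frames):
        lean_out = lean_model(frame)
        if t % decimation == 0:
            full_out = full_model(frame)  # update at the reduced rate
        outputs.append(0.5 * (lean_out + full_out))
    return np.stack(outputs)

# Usage with dummy per-frame "models" (placeholders for the two networks):
frames = np.zeros((6, 4))                 # 6 frames, 4 features each
lean = lambda x: x + 1.0                  # cheap model, runs every frame
full = lambda x: x + 2.0                  # expensive model, runs every 3rd frame
out = multirate_ensemble(frames, lean, full, decimation=3)
```

Larger decimation factors reduce how often the full model runs, which is the knob that produces the family of accuracy vs. efficiency operating points described above.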


 DOI: 10.21437/Interspeech.2020-3003

Cite as: Koumparoulis, A., Potamianos, G., Thomas, S., Morais, E.D.S. (2020) Resource-Adaptive Deep Learning for Visual Speech Recognition. Proc. Interspeech 2020, 3510-3514, DOI: 10.21437/Interspeech.2020-3003.


@inproceedings{Koumparoulis2020,
  author={Alexandros Koumparoulis and Gerasimos Potamianos and Samuel Thomas and Edmilson da Silva Morais},
  title={{Resource-Adaptive Deep Learning for Visual Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3510--3514},
  doi={10.21437/Interspeech.2020-3003},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3003}
}