ESCA Workshop on Audio-Visual Speech Processing (AVSP'97)
September 26-27, 1997
This paper presents the application of a specialized language to the task of lip modeling. The models presented here switch their state during evaluation; state changes are driven by a highly accurate neural network decision. Different model states include optional parts for the inner mouth region. Using the contour description together with a neurally coded color profile, encouraging performance figures have been achieved.
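The state-switching idea described above can be illustrated with a minimal sketch. All names here (`LipState`, `MultiStateLipModel`, the toy threshold classifier standing in for the paper's neural network decision) are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LipState:
    # One state of the multi-state lip model; the "open" state may
    # carry optional contour parts for the inner mouth region.
    name: str
    contour_points: int          # points in the outer-lip contour
    inner_mouth_points: int = 0  # optional inner-mouth parts

@dataclass
class MultiStateLipModel:
    states: List[LipState]
    # Maps a color-profile feature vector to a state index; in the
    # paper this decision comes from a neural network.
    classifier: Callable[[List[float]], int]

    def evaluate(self, color_profile: List[float]) -> LipState:
        """Select the active model state for the current frame."""
        return self.states[self.classifier(color_profile)]

# Toy stand-in for the neural decision: threshold on mean intensity.
def toy_classifier(profile: List[float]) -> int:
    return 1 if sum(profile) / len(profile) > 0.5 else 0

model = MultiStateLipModel(
    states=[
        LipState("closed", contour_points=16),
        LipState("open", contour_points=16, inner_mouth_points=8),
    ],
    classifier=toy_classifier,
)

print(model.evaluate([0.8, 0.9, 0.7]).name)  # "open"
```

The point of the design is that the contour description changes shape per frame: optional inner-mouth parts are only active when the decision function selects the corresponding state.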
Bibliographic reference: Vogt, Michael (1997): "Interpreted multi-state lip models for audio-visual speech recognition", in AVSP-1997, 125-128.