Auditory-Visual Speech Processing (AVSP) 2009
University of East Anglia, Norwich, UK
This paper presents an analysis of lip shapes during speech that accompanies sign language, referred to as sign speech. A new sign speech database is collected and a new framework for the analysis of mouth patterns is introduced. Using a shape model restricted to the outer lip contour, we show that the articulatory parameters derived for visual speech alone are not sufficient to represent sign speech; the errors occur mainly in the degree of mouth opening. A correction to the standard articulatory parameters and additional articulatory parameters are investigated to cover the observed mouth patterns and thus refine the synthesised sign speech.
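The abstract's shape model restricted to the outer lip contour is typically realised as a point-distribution model: landmark coordinates are stacked into vectors and principal component analysis yields a small set of modes that play the role of articulatory parameters. The sketch below illustrates this idea only; all data are synthetic (the landmark count, the ellipse-shaped mean lip, and the single "mouth opening" mode are assumptions for illustration, not the paper's trained model).

```python
import numpy as np

# Illustrative point-distribution model of an outer lip contour.
# Synthetic stand-in for the paper's trained shape model.
rng = np.random.default_rng(0)

n_points = 20    # landmarks on the outer lip contour (assumed)
n_samples = 200  # synthetic training shapes

# Mean lip shape: an ellipse of (x, y) landmarks, flattened to 2*n_points.
t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
mean_shape = np.column_stack([np.cos(t), 0.5 * np.sin(t)]).ravel()

# Synthetic training set: the dominant variation scales the vertical
# coordinates, mimicking mouth opening/closing.
opening = rng.normal(0.0, 0.2, size=n_samples)
shapes = np.tile(mean_shape, (n_samples, 1))
shapes[:, 1::2] *= (1.0 + opening)[:, None]  # vary y-coordinates only
shapes += rng.normal(0.0, 0.01, size=shapes.shape)

# PCA via SVD of the centred data: the leading right singular vectors
# act as articulatory parameters (modes of lip-shape variation).
X = shapes - shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
n_params = 3
P = Vt[:n_params]   # modes of variation, shape (n_params, 2*n_points)
b = X @ P.T         # parameter values per training shape

# Reconstruct a shape from its parameters: x ~ mean + b @ P
recon = shapes.mean(axis=0) + b[0] @ P
err = np.linalg.norm(recon - shapes[0])
print(round(float(err), 3))
```

With a single dominant mode plus noise, a few parameters reconstruct each contour closely; the paper's finding is that parameters trained on visual speech alone leave systematic residuals on sign speech, especially for mouth opening.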
Index Terms: visual speech synthesis, talking head, sign speech synthesis, articulatory parameters
Bibliographic reference. Krňoul, Zdeněk (2009): "Refinement of lip shape in sign speech synthesis", In AVSP-2009, 161-165.