Auditory-Visual Speech Processing (AVSP) 2010
Hakone, Kanagawa, Japan
When creating realistic talking-head animations, accurate modeling of the speech articulators is important for speech intelligibility. Previous lip-modeling methods, such as simple numerical lip models, aim at a general lip model and do not capture speaker-to-speaker variation. Here we present a method for creating accurate speaker-specific lip representations that retain the individual characteristics of a speaker's lips via an adaptive numerical approach using 3D scanned surface and MRI data. By automatically adjusting spline parameters to minimize the error between the node points of the lip model and the raw 3D surface, new 3D lips are created efficiently and easily. The resulting lip models will be used in our talking-head animation system to evaluate auditory-visual speech perception, and to analyze our 3D face database for statistically relevant lip features.
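The core fitting step described above, adjusting spline parameters to minimize the error between model node points and the raw 3D surface, can be posed as a linear least-squares problem. The following Python sketch is our own illustration under that assumption, not the authors' implementation; the function names, control-point count, and clamped-B-spline setup are all hypothetical choices.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    denom = knots[i + k] - knots[i]
    if denom > 0:
        out += (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    denom = knots[i + k + 1] - knots[i + 1]
    if denom > 0:
        out += (knots[i + k + 1] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return out

def fit_lip_spline(points, n_ctrl=8, degree=3):
    """Least-squares fit of B-spline control points to an ordered 3D lip contour.

    points : (n, 3) array of raw scanned surface points along one lip contour.
    Returns (control points, fitted node positions on the spline).
    """
    # Chord-length parameterisation assigns each raw point a parameter in [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]
    # Clamped uniform knot vector: curve interpolates the first/last control point.
    knots = np.r_[np.zeros(degree),
                  np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                  np.ones(degree)]
    # Design matrix of basis-function values at each parameter value.
    B = np.array([[bspline_basis(j, degree, ti, knots) for j in range(n_ctrl)]
                  for ti in t])
    B[-1, -1] = 1.0  # half-open interval convention misses t == 1 exactly
    # Minimize ||B c - points||^2 over the control points c.
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl, B @ ctrl
```

A model with few control points fitted this way adapts automatically to each scanned speaker: re-running the fit on a new subject's contour yields that subject's control points with no manual tuning, which is the spirit of the adaptive approach the abstract describes.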
Bibliographic reference. Kuratate, Takaaki / Riley, Marcia (2010): "Building speaker-specific lip models for talking heads from 3d face data", In AVSP-2010, paper P9.