Auditory-Visual Speech Processing
The goal of our project is to collect a dataset of 3D facial motion parameters for talking-head synthesis. However, capturing human facial motion is usually expensive in related research, since special devices such as optical or electronic trackers must be employed. In this paper, we propose a robust, accurate, and inexpensive approach to estimating human facial motion from mirror-reflected videos. The approach exploits the geometric relationship between the original and mirror-reflected images, and can be more robust than most general-purpose stereo-vision approaches for motion analysis of mirror-reflected videos. A preliminary dataset of facial motion parameters covering MPEG-4 and French visemes, together with voice data, has been acquired; the estimated data have also been applied to our facial animation system.
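The central geometric idea, that a planar mirror induces a virtual camera, so a point and its reflection seen in a single view form a stereo pair, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mirror-plane parameters, the marker position, and the midpoint triangulation routine are all illustrative assumptions.

```python
import numpy as np

def reflect(p, n, d):
    """Reflect point p across the plane {x : n.x = d}; n is a unit normal."""
    return p - 2.0 * (n @ p - d) * n

def reflect_dir(v, n):
    """Reflect a direction vector across a plane with unit normal n."""
    return v - 2.0 * (n @ v) * n

def triangulate(c1, r1, c2, r2):
    """Least-squares midpoint of two rays c_i + t_i * r_i."""
    r1, r2 = r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2)
    A = np.array([[r1 @ r1, -r1 @ r2],
                  [r1 @ r2, -r2 @ r2]])
    b = np.array([(c2 - c1) @ r1, (c2 - c1) @ r2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * r1) + (c2 + t2 * r2))

# Illustrative setup: mirror plane x = 1, camera on the -z axis.
n, d = np.array([1.0, 0.0, 0.0]), 1.0
cam = np.array([0.0, 0.0, -5.0])      # real camera centre
virtual_cam = reflect(cam, n, d)      # mirror-induced virtual camera

marker = np.array([0.3, 0.2, -1.0])   # a 3D facial marker (assumed)
marker_mirror = reflect(marker, n, d) # its reflection, seen in the mirror

# Ray to the marker seen directly, and the ray to its mirror image
# re-reflected so it emanates from the virtual camera.
ray_direct = marker - cam
ray_virtual = reflect_dir(marker_mirror - cam, n)

p = triangulate(cam, ray_direct, virtual_cam, ray_virtual)
# p recovers the original 3D marker position.
```

In practice the two rays would come from noisy image observations rather than exact geometry, which is why a least-squares midpoint (rather than an exact intersection) is used here.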
Bibliographic reference. Lin, I-Chen / Yeh, Jeng-Sheng / Ouhyoung, Ming (2001): "Extraction of 3D facial motion parameters from mirror-reflected multi-view video for audio-visual synthesis", In AVSP-2001, 66-71.