International Conference on Auditory-Visual Speech Processing 2008

Tangalooma Wild Dolphin Resort, Moreton Island, Queensland, Australia
September 26-29, 2008

Static and Dynamic Lip Feature Analysis for Speaker Verification

S. L. Wang (1), Alan Wee-Chung Liew (2)

(1) School of Info. Security Engg., Shanghai Jiaotong University, Shanghai, China
(2) School of Info. and Comm. Technology, Griffith University, Brisbane, Australia

Different speakers have distinct talking styles, so lip shape and its movement can serve as a biometric for inferring a speaker's identity. Compared with traditional biometrics such as the face and fingerprint, person verification based on lip features has the advantage of exploiting both static and dynamic information. Many researchers have demonstrated that incorporating dynamic information such as lip movement helps improve verification performance. However, whether the static or the dynamic features are more discriminative remains an open question. In this paper, we analyze the discriminative power of static and dynamic lip features. For the static lip features, a new feature representation combining geometric features, contour descriptors and texture features is proposed, and the Gaussian Mixture Model (GMM) is employed as the classifier. For the dynamic features, the Hidden Markov Model (HMM) is employed as the classifier for its strength in modeling time-series data. Experiments are carried out on a database of 40 speakers recorded in our lab. A detailed evaluation of the various static/dynamic lip feature representations is presented, along with a discussion of their discriminative ability. The experimental results show that the dynamic lip shape information and the static lip texture information carry substantial identity-relevant information.


Bibliographic reference.  Wang, S. L. / Liew, Alan Wee-Chung (2008): "Static and dynamic lip feature analysis for speaker verification", In AVSP-2008, 223-227.