Speech Prosody 2006

Dresden, Germany
May 2-5, 2006

Exploiting Glottal and Prosodic Information for Robust Speaker Verification

Yuan-Fu Liao (1), Zhi-Ren Zeng (1), Zi-He Chen (2), Yau-Tarng Juang (2)

(1) Department of Electronic Engineering, National Taipei University of Technology, Taipei, Taiwan
(2) Department of Electrical Engineering, National Central University, Chung-Li, Taoyuan, Taiwan

In this paper, three different levels of speaker cues, namely glottal, prosodic and spectral information, are integrated to build a robust speaker verification system. The main goal is to resist channel and handset distortion. In particular, the dynamic behavior of the normalized amplitude quotient (NAQ) and of prosodic feature contours is modeled using Gaussian mixture models (GMMs) and two latent prosody analysis (LPA)-based approaches, respectively. The proposed methods are evaluated on the standard one-speaker detection task of the 2001 NIST Speaker Recognition Evaluation corpus, where only one 2-minute training utterance and 30-second trial utterances (on average) are available. Experimental results show that the proposed approach improves the equal error rates (EERs) of the maximum a posteriori-adapted (MAP) GMM and GMM+T-norm baselines from 12.4% and 9.5% to 10.3% and 8.3%, respectively, and finally to 7.8%.
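The normalized amplitude quotient mentioned above is a standard glottal-source parameter (Alku et al.): the AC amplitude of the glottal flow pulse divided by the product of the negative peak of the flow derivative and the fundamental period. The paper's own feature-extraction pipeline is not reproduced here; the following is only a minimal sketch of that definition for a single, already-extracted glottal-flow cycle, with the function name and the synthetic sinusoidal test pulse being illustrative assumptions.

```python
import numpy as np

def normalized_amplitude_quotient(flow, fs, f0):
    """Sketch of NAQ for one glottal-flow cycle (not the paper's code).

    NAQ = f_ac / (d_peak * T), where f_ac is the peak-to-peak (AC)
    amplitude of the glottal flow, d_peak is the magnitude of the
    negative peak of the flow derivative, and T = 1/f0 is the
    fundamental period.
    """
    f_ac = flow.max() - flow.min()       # AC amplitude of the flow pulse
    d_peak = -np.diff(flow).min() * fs   # |negative peak| of the derivative (per second)
    T = 1.0 / f0                         # fundamental period in seconds
    return f_ac / (d_peak * T)

# Illustrative input: one cycle of a sinusoidal "flow" pulse
# (real glottal pulses are asymmetric; this is only a shape check).
fs, f0 = 8000, 100.0
t = np.arange(int(fs / f0)) / fs
flow = np.sin(2 * np.pi * f0 * t)
naq = normalized_amplitude_quotient(flow, fs, f0)
```

For a pure sinusoid the quotient evaluates to about 1/π ≈ 0.318; in the paper it is the frame-to-frame dynamics of such NAQ values, not single numbers, that are modeled with GMMs.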

Bibliographic reference.  Liao, Yuan-Fu / Zeng, Zhi-Ren / Chen, Zi-He / Juang, Yau-Tarng (2006): "Exploiting glottal and prosodic information for robust speaker verification", In SP-2006, paper 238.