Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Within-Class Covariance Normalization for SVM-Based Speaker Recognition

Andrew O. Hatch (1,2), Sachin Kajarekar (3), Andreas Stolcke (1,3)

(1) International Computer Science Institute, USA; (2) The University of California at Berkeley, USA; (3) SRI International, USA

This paper extends the within-class covariance normalization (WCCN) technique described in [1, 2] for training generalized linear kernels. We describe a practical procedure for applying WCCN to an SVM-based speaker recognition system where the input feature vectors reside in a high-dimensional space. Our approach involves using principal component analysis (PCA) to split the original feature space into two subspaces: a low-dimensional "PCA space" and a high-dimensional "PCA-complement space." After performing WCCN in the PCA space, we concatenate the resulting feature vectors with a weighted version of their PCA-complements. When applied to a state-of-the-art MLLR-SVM speaker recognition system, this approach achieves improvements of up to 22% in EER and 28% in minimum decision cost function (DCF) over our previous baseline. We also achieve substantial improvements over an MLLR-SVM system that performs WCCN in the PCA space but discards the PCA-complement.
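The procedure described above (PCA split, WCCN in the low-dimensional subspace, then concatenation with a weighted complement) can be sketched in NumPy. This is a minimal illustration under assumed details: the function name, the complement weight `alpha`, and the use of a Cholesky factor of the inverse within-class covariance as the WCCN transform are illustrative choices, not taken verbatim from the paper.

```python
import numpy as np

def wccn_with_pca_split(X, labels, k, alpha=0.5):
    """Sketch of WCCN applied in a k-dimensional PCA subspace,
    concatenated with a weighted PCA-complement residual.

    X      : (n_samples, d) feature matrix
    labels : (n_samples,) class labels (e.g., speaker IDs)
    k      : PCA-space dimensionality
    alpha  : weight on the PCA-complement (illustrative parameter)
    """
    mu = X.mean(axis=0)
    Xc = X - mu

    # Top-k PCA basis via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                 # (d, k) basis of the "PCA space"
    Z = Xc @ V                   # low-dimensional projections
    R = Xc - Z @ V.T             # residuals in the "PCA-complement space"

    # Within-class covariance of the PCA-space features
    W = np.zeros((k, k))
    for c in np.unique(labels):
        Zc = Z[labels == c]
        W += np.cov(Zc, rowvar=False, bias=True) * len(Zc)
    W /= len(Z)

    # WCCN transform: A with A @ A.T = W^{-1}
    A = np.linalg.cholesky(np.linalg.inv(W))
    Z_wccn = Z @ A

    # Concatenate normalized PCA features with the weighted complement
    return np.hstack([Z_wccn, alpha * R])
```

The resulting vectors (dimension `k + d`) would then feed a linear-kernel SVM; setting `alpha = 0` corresponds to discarding the PCA-complement, the weaker variant the abstract compares against.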


Bibliographic reference. Hatch, Andrew O. / Kajarekar, Sachin / Stolcke, Andreas (2006): "Within-class covariance normalization for SVM-based speaker recognition", in INTERSPEECH-2006, paper 1874-Wed1A1O.5.