INTERSPEECH 2006 - ICSLP
Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Generalization of the Minimum Classification Error (MCE) Training Based on Maximizing Generalized Posterior Probability (GPP)

Qiang Fu (1), Antonio Moreno-Daniel (1), Biing-Hwang Juang (1), Jian-Lai Zhou (2), Frank K. Soong (2)

(1) Georgia Institute of Technology, USA; (2) Microsoft Research Asia, China

In this paper, we generalize the training error definitions for minimum classification error (MCE) training and investigate their impact on recognition performance. Starting from the conventional MCE method, we discuss three issues regarding the training error definition that may affect recognizer performance and warrant extensive study; we focus on the first two in this paper. We revisit the fact that the objective function in MCE training can be reformulated as an equivalent maximization of the "posterior probability" of the corresponding training units. Based on the framework of the generalized posterior probability (GPP) [1], we design experiments to demonstrate the effects of different training units and of different constraints on segmentation boundaries in MCE training. We also provide a performance analysis that illustrates our generalization on both phone recognition and word recognition tasks using the Wall Street Journal (WSJ0) [2, 3] database.
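The abstract does not reproduce the formulas behind the stated MCE-posterior equivalence. As a sketch, using the standard MCE formulation from the literature (not given in this abstract), with M classes, discriminant functions g_j(X; \Lambda) taken as log joint scores, g_j(X; \Lambda) = log [p(X | C_j) P(C_j)], and smoothing parameters \eta and \gamma, the misclassification measure and sigmoid loss are

\[
d_k(X;\Lambda) = -g_k(X;\Lambda)
  + \frac{1}{\eta}\log\!\Big[\frac{1}{M-1}\sum_{j\neq k} e^{\eta\, g_j(X;\Lambda)}\Big],
\qquad
\ell\big(d_k(X;\Lambda)\big) = \frac{1}{1+e^{-\gamma\, d_k(X;\Lambda)}}.
\]

For \eta = \gamma = 1 (and dropping the 1/(M-1) scaling), the loss reduces to

\[
\ell\big(d_k(X;\Lambda)\big)
  = \frac{\sum_{j\neq k} e^{g_j(X;\Lambda)}}{\sum_{j} e^{g_j(X;\Lambda)}}
  = 1 - P(C_k \mid X),
\]

so minimizing the expected MCE loss amounts to maximizing the posterior probability of the correct class. This is the equivalence that the GPP framework generalizes from a fixed class to different choices of training unit and segmentation constraint.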


Bibliographic reference.  Fu, Qiang / Moreno-Daniel, Antonio / Juang, Biing-Hwang / Zhou, Jian-Lai / Soong, Frank K. (2006): "Generalization of the minimum classification error (MCE) training based on maximizing generalized posterior probability (GPP)", In INTERSPEECH-2006, paper 1780-Mon3CaP.12.