Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Minimum Divergence Based Discriminative Training

Jun Du (1), Peng Liu (2), Frank K. Soong (2), Jian-Lai Zhou (2), Ren-Hua Wang (1)

(1) University of Science & Technology of China, China; (2) Microsoft Research Asia, China

We propose to use Minimum Divergence (MD) as a new measure of errors in discriminative training. To focus on improving discrimination between any two given acoustic models, we refine the error definition in terms of the Kullback-Leibler Divergence (KLD) between them. The new measure can be regarded as a modified version of Minimum Phone Error (MPE), but with a higher resolution than a purely symbol-matching based criterion. Experimental recognition results show that the new MD based training yields relative word error rate reductions of 57.8% and 6.1% on the TIDigits and Switchboard databases, respectively, compared with the ML trained baseline systems. The recognition performance of MD is also shown to be consistently better than that of MPE.
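The abstract defines acoustic errors via the KLD between two models rather than by symbol matching. As a minimal illustration of the underlying quantity (not the paper's exact formulation, which operates on HMM state distributions), the sketch below computes the closed-form KLD between two univariate Gaussians; identical models give zero divergence, while well-separated models give a large one:

```python
import math

def gaussian_kld(mu_p, var_p, mu_q, var_q):
    """Closed-form KL divergence KL(p || q) between two univariate
    Gaussians p = N(mu_p, var_p) and q = N(mu_q, var_q).
    Illustrative helper, not from the paper itself."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

# Identical distributions: divergence is 0 (no "error").
print(gaussian_kld(0.0, 1.0, 0.0, 1.0))  # 0.0
# Means three standard deviations apart: large divergence.
print(gaussian_kld(0.0, 1.0, 3.0, 1.0))  # 4.5
```

Unlike a 0/1 symbol-match score, this divergence varies continuously with model similarity, which is the "higher resolution" the abstract refers to.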


Bibliographic reference.  Du, Jun / Liu, Peng / Soong, Frank K. / Zhou, Jian-Lai / Wang, Ren-Hua (2006): "Minimum divergence based discriminative training", In INTERSPEECH-2006, paper 1703-Thu2A1O.2.