INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Lightly Supervised Training for Risk-Based Discriminative Language Models

Akio Kobayashi, Takahiro Oku, Yuya Fujita, Shoei Sato

NHK, Japan

We propose a lightly supervised training method for a discriminative language model (DLM) based on risk minimization criteria. In lightly supervised training, pseudo labels generated by automatic speech recognition (ASR) are used as references. However, because these labels usually contain recognition errors, discriminative models estimated from such faulty reference labels may degrade ASR performance. Therefore, an approach that prevents this performance degradation is necessary for discriminative language modeling. In our proposed lightly supervised training, the DLM is estimated from a "fused" risk, which is a relaxed version of the conventional Bayes risk. The fused risk is computed in a supervised manner when pseudo labels are accepted as references with high confidence, and in an unsupervised manner when the labels are rejected due to low confidence. Accordingly, minimizing the fused risk over the training lattices yields a DLM with smoothed model parameters. The experimental results show that our proposed lightly supervised training method significantly reduced the word error rate compared with DLMs trained with conventional lightly supervised methods.
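The abstract does not give the exact form of the fused risk, so the following Python sketch is only illustrative, not the authors' formulation. It assumes the fused objective is gated per utterance: an expected loss against the ASR pseudo label when its confidence clears a threshold (the supervised case), and an expected pairwise loss among competing hypotheses when it does not (the unsupervised case). All names (fused_risk, edit_distance, conf_threshold, the utterance fields) are hypothetical, the lattice is approximated by an N-best list, and the optimization of the DLM parameters against this risk is omitted.

    def edit_distance(a, b):
        """Word-level Levenshtein distance between two token sequences."""
        prev = list(range(len(b) + 1))
        for i, wa in enumerate(a, 1):
            cur = [i]
            for j, wb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,          # deletion
                               cur[j - 1] + 1,       # insertion
                               prev[j - 1] + (wa != wb)))  # substitution
            prev = cur
        return prev[-1]

    def fused_risk(utterances, conf_threshold=0.9):
        """Average fused risk over training utterances (illustrative only).

        Each utterance is assumed to carry an N-best approximation of its lattice:
          "hyps":   list of (word_list, posterior) pairs,
          "pseudo": the ASR 1-best word list used as a pseudo label,
          "conf":   confidence score of that pseudo label.
        """
        total = 0.0
        for utt in utterances:
            hyps = utt["hyps"]
            if utt["conf"] >= conf_threshold:
                # Supervised-style term: expected loss against the accepted pseudo label.
                risk = sum(p * edit_distance(h, utt["pseudo"]) for h, p in hyps)
            else:
                # Unsupervised-style term: expected pairwise loss among hypotheses,
                # computed without trusting any reference for this utterance.
                risk = sum(p * q * edit_distance(h1, h2)
                           for h1, p in hyps for h2, q in hyps)
            total += risk
        return total / len(utterances)

    # Toy usage with two utterances, one accepted and one rejected by confidence.
    utts = [
        {"hyps": [(["a", "b"], 0.7), (["a", "c"], 0.3)],
         "pseudo": ["a", "b"], "conf": 0.95},
        {"hyps": [(["x", "y"], 0.6), (["x", "z"], 0.4)],
         "pseudo": ["x", "y"], "conf": 0.40},
    ]
    print(fused_risk(utts))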


Bibliographic reference. Kobayashi, Akio / Oku, Takahiro / Fujita, Yuya / Sato, Shoei (2013): "Lightly supervised training for risk-based discriminative language models", In INTERSPEECH-2013, 1213-1217.