14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Comparing Computation in Gaussian mixture and Neural Network Based Large-Vocabulary Speech Recognition

Vishwa Gupta, Gilles Boulianne

CRIM, Canada

In this paper we look at real-time computing issues in large-vocabulary speech recognition. We use the French broadcast audio transcription task from ETAPE 2011 for this evaluation. We compare word error rate (WER) versus overall computing time for hidden Markov models with Gaussian mixtures (GMM-HMM) and with deep neural networks (DNN-HMM). We show that, for similar computation during recognition, the DNN-HMM combination is superior to the GMM-HMM. In a real-time computing scenario, the error rate on the ETAPE dev set is 23.5% for the DNN-HMM versus 27.9% for the GMM-HMM: a significant difference in accuracy for comparable computation. Rescoring the lattices (generated with the DNN-HMM acoustic model) first with a quadgram language model (LM) and then with a neural network LM reduces the WER to 22.0% while still providing real-time computing.


Bibliographic reference.  Gupta, Vishwa / Boulianne, Gilles (2013): "Comparing computation in Gaussian mixture and neural network based large-vocabulary speech recognition", In INTERSPEECH-2013, 617-621.