4th International Conference on Spoken Language Processing

Philadelphia, PA, USA
October 3-6, 1996

Quantizing Mixture-weights in a Tied-mixture HMM

Sunil K. Gupta, Frank K. Soong, Raziel Haimi-Cohen

Bell Laboratories, Lucent Technologies, Murray Hill, NJ, USA

In this paper, we describe new techniques to significantly reduce the computational, storage, and memory-access requirements of a tied-mixture-HMM-based speech recognition system. Although continuous-mixture HMMs offer improved recognition performance, we show that tied-mixture HMMs may offer a significant advantage in complexity reduction for low-cost implementations. In particular, we consider two tasks: (a) connected-digit recognition in car noise, and (b) sub-word modeling for command-word recognition in a noisy office environment. We show that quantization of the mixture weights can provide an almost threefold reduction in mixture-weight storage requirements without any significant loss in recognition performance. Furthermore, we show that combining mixture-weight quantization with techniques such as VQ-Assist reduces the computational and memory-access requirements by 60-80% without any degradation in recognition performance.
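The core idea, quantizing the per-state mixture weights of a tied-mixture HMM into a small shared codebook, can be sketched as follows. The paper does not specify its codebook design here, so this is a minimal illustration under assumed choices: a scalar codebook spaced uniformly in the log domain, weights replaced by byte-sized indices, and state likelihoods computed from the dequantized weights over the shared Gaussian densities.

```python
import numpy as np

# Hypothetical sketch of mixture-weight quantization for a tied-mixture HMM.
# Assumption: a scalar codebook uniform in the log-weight domain; the paper's
# actual quantizer design may differ. Storage drops because each float weight
# is replaced by a small codebook index.

def build_weight_codebook(weights, n_levels=16):
    """Design a shared scalar codebook over all mixture weights."""
    w = np.clip(weights, 1e-6, None)          # guard against log(0)
    lo, hi = np.log(w.min()), np.log(w.max())
    return np.exp(np.linspace(lo, hi, n_levels))

def quantize_weights(weights, codebook):
    """Map each weight to the index of its nearest codebook entry."""
    idx = np.argmin(np.abs(weights[:, :, None] - codebook[None, None, :]), axis=-1)
    return idx.astype(np.uint8)               # 16 levels fit in 4 bits; a byte here

def state_likelihoods(indices, codebook, tied_densities):
    """b_j(o) = sum_m c_{jm} N_m(o), with c_{jm} read back from the codebook."""
    c = codebook[indices]                     # dequantized weights, (states, mixtures)
    c = c / c.sum(axis=1, keepdims=True)      # renormalize after quantization
    return c @ tied_densities                 # tied_densities: N_m(o), one per shared Gaussian

# Example: 10 states sharing a pool of 64 Gaussians, 16-level weight codebook.
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(64), size=10)       # original per-state mixture weights
cb = build_weight_codebook(w, n_levels=16)
q = quantize_weights(w, cb)
dens = rng.random(64)                         # stand-in for the shared Gaussian densities
print(state_likelihoods(q, cb, dens))
```

With 4-bit indices in place of 32-bit floats the raw weight table shrinks by 8x; the roughly threefold figure reported in the abstract presumably reflects additional overheads (codebook storage, index packing) not modeled in this toy example.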


Bibliographic reference. Gupta, Sunil K. / Soong, Frank K. / Haimi-Cohen, Raziel (1996): "Quantizing mixture-weights in a tied-mixture HMM", in Proc. ICSLP 1996, 1828-1831.