Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Reducing Computation on Parallel Decoding Using Frame-Wise Confidence Scores

Tomohiro Hakamata, Akinobu Lee, Yoshihiko Nankaku, Keiichi Tokuda

Nagoya Institute of Technology, Japan

Parallel decoding based on multiple models has been studied as a way for a speech recognition system to cover various conditions and speakers at once. However, running one recognizer per model in parallel causes the total computational cost to grow in proportion to the number of models. In this paper, an efficient way of finding and pruning unpromising decoding processes during search is proposed. By comparing temporal search statistics at each frame across all decoders, decoders whose models are relatively unmatched to the input can be pruned in the middle of the recognition process to save computational cost. This method allows the model structures to be mutually independent. Two frame-wise pruning measures, based on maximum hypothesis likelihoods and on top confidence scores respectively, and their combination are investigated. Experimental results on parallel recognition with seven acoustic models showed that, by using both criteria, the total computational cost was reduced to 36.53% of full computation without degrading recognition accuracy.
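The frame-wise pruning idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each decoder reports a per-frame best hypothesis log-likelihood, and prunes any decoder whose score falls outside a fixed beam of the best active decoder at that frame. The function name, the beam width, and the input format are all hypothetical.

```python
def prune_decoders(frame_scores, beam=5.0):
    """Frame-wise pruning of parallel decoders (illustrative sketch).

    frame_scores: list over frames; each entry is a dict mapping a
        decoder id to that decoder's best hypothesis log-likelihood
        at that frame (a hypothetical input format).
    beam: hypothetical pruning beam width in log-likelihood units.

    Returns the set of decoder ids still active after all frames.
    """
    active = set(frame_scores[0].keys())
    for scores in frame_scores:
        # Best score among decoders that are still running.
        best = max(scores[d] for d in active)
        # Prune decoders whose best hypothesis falls outside the beam;
        # pruned decoders stop consuming computation for later frames.
        active = {d for d in active if scores[d] >= best - beam}
    return active


if __name__ == "__main__":
    # Simulated per-frame scores for three decoders (ids 0, 1, 2).
    frames = [
        {0: -10.0, 1: -11.0, 2: -20.0},  # decoder 2 falls outside the beam
        {0: -25.0, 1: -22.0, 2: -30.0},
    ]
    print(prune_decoders(frames, beam=5.0))
```

A real system would also need a margin against pruning the eventually-best decoder too early, e.g. by requiring the score gap to persist over several consecutive frames; the paper's second measure, based on confidence scores, addresses the same robustness concern.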


Bibliographic reference. Hakamata, Tomohiro / Lee, Akinobu / Nankaku, Yoshihiko / Tokuda, Keiichi (2006): "Reducing computation on parallel decoding using frame-wise confidence scores", in INTERSPEECH-2006, paper 1878-Wed1CaP.10.