Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Acoustic Modeling for Spoken Dialogue Systems Based on Unsupervised Utterance-Based Selective Training

Tobias Cincarek, Tomoki Toda, Hiroshi Saruwatari, Kiyohiro Shikano

Nara Institute of Science & Technology, Japan

The construction of high-performance acoustic models for certain speech recognition tasks is very costly and time-consuming, since it most often requires the collection and transcription of large amounts of task-specific speech data. In this paper, acoustic modeling for spoken dialogue systems based on unsupervised selective training is examined. The main idea is to select those training utterances from an (untranscribed) speech data pool which maximize the likelihood of a separate, small (transcribed) development speech data set. If only the selected data are employed to retrain the initial acoustic models, better performance is achieved than by retraining with all collected data. The proposed approach also makes it possible to considerably reduce the cost of human labeling of the speech data without compromising performance. Furthermore, the method provides a means of automatic task adaptation of acoustic models, e.g., to adult or children's speech. This is important, since detailed information about each automatically collected utterance is usually not available.
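The selection criterion described above can be illustrated with a toy sketch. The abstract does not specify the search procedure or the model family, so the following assumes a simple greedy strategy and a single-Gaussian "acoustic model" over scalar frames in place of the paper's HMMs; the function names (`select_utterances`, `refit`) are hypothetical, not from the paper. Each pool utterance is kept only if re-estimating the model on the enlarged selection increases the development-set log-likelihood.

```python
import math
import statistics

def log_likelihood(frames, mean, var):
    # Gaussian log-likelihood of a list of scalar frames.
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
               for x in frames)

def refit(utterances):
    # Maximum-likelihood re-estimation (toy stand-in for HMM retraining).
    frames = [x for utt in utterances for x in utt]
    mean = statistics.fmean(frames)
    var = statistics.pvariance(frames) or 1e-6  # guard against zero variance
    return mean, var

def select_utterances(pool, dev, seed_utts):
    """Greedy, unsupervised selection: a pool utterance (no transcription
    needed) is added only if retraining on the enlarged selection raises
    the log-likelihood of the transcribed development set `dev`."""
    selected = list(seed_utts)
    mean, var = refit(selected)
    best = log_likelihood(dev, mean, var)
    for utt in pool:
        cand_mean, cand_var = refit(selected + [utt])
        score = log_likelihood(dev, cand_mean, cand_var)
        if score > best:
            selected.append(utt)
            best = score
    return selected

# Usage: the dev set is centered near 0, so the mismatched pool
# utterance around 5.0 should be rejected by the criterion.
pool = [[0.1, -0.2, 0.0], [5.0, 5.2, 4.9], [0.2, 0.1, -0.1]]
dev = [0.0, 0.1, -0.1]
selected = select_utterances(pool, dev, seed_utts=[[0.05, -0.05]])
```

In this toy setting the criterion behaves as described in the abstract: data that matches the target task (as represented by the development set) is retained, while mismatched data is discarded, without ever consulting transcriptions of the pool.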


Bibliographic reference.  Cincarek, Tobias / Toda, Tomoki / Saruwatari, Hiroshi / Shikano, Kiyohiro (2006): "Acoustic modeling for spoken dialogue systems based on unsupervised utterance-based selective training", in INTERSPEECH-2006, paper 1481-Wed2A2O.2.