EUROSPEECH 2001 Scandinavia
7th European Conference on Speech Communication and Technology

Aalborg, Denmark
September 3-7, 2001


Multi-Class Composite N-Gram Language Model Using Multiple Word Clusters and Word Successions

Shuntaro Isogai (1), Katsuhiko Shirai (1), Hirofumi Yamamoto (2), Yoshinori Sagisaka (2)

(1) Waseda University, Japan
(2) ATR Spoken Language Translation Research Laboratories, Japan

In this paper, a new language model, the Multi-Class Composite N-gram, is proposed to avoid the data-sparseness problem that arises with small amounts of training data. The Multi-Class Composite N-gram maintains accurate word prediction and reliability under sparse data with a compact model size, based on multiple word clusters called Multi-Classes. In a Multi-Class, the statistical connectivity at each position of the N-gram is regarded as a word attribute, and a separate word cluster is created to represent the attribute at each position. Furthermore, by introducing higher-order word N-grams through the grouping of frequent word successions, Multi-Class N-grams are extended to Multi-Class Composite N-grams. In experiments, the Multi-Class Composite N-grams yield 9.5% lower perplexity and a 16% lower word error rate in speech recognition, with a 40% smaller parameter size than conventional word 3-grams.
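The positional-attribute idea above can be sketched as a class-based bigram in which each word carries two cluster labels, one for its behaviour as the predicted word and one for its behaviour as the history word, so that P(w|v) is factored through position-dependent clusters. The sketch below uses a toy corpus and invented cluster assignments purely for illustration; it is not the authors' implementation, and the cluster names and counts are assumptions.

```python
# Minimal sketch of a Multi-Class bigram, assuming a toy corpus and
# hand-assigned clusters (all names here are illustrative, not from the paper).
from collections import defaultdict

# Each word gets one cluster for the target position and one for the
# history position -- the "multiple word clusters" of the Multi-Class model.
target_class = {"the": "DET_t", "a": "DET_t", "cat": "NOUN_t", "dog": "NOUN_t"}
history_class = {"the": "DET_h", "a": "DET_h", "cat": "NOUN_h", "dog": "NOUN_h"}

corpus = "the cat the dog a cat a dog the cat".split()

class_bigram = defaultdict(int)   # counts for P(c_target | c_history)
history_count = defaultdict(int)
word_in_class = defaultdict(int)  # counts for P(word | c_target)
class_count = defaultdict(int)

for w in corpus:
    word_in_class[(w, target_class[w])] += 1
    class_count[target_class[w]] += 1

for prev, cur in zip(corpus, corpus[1:]):
    class_bigram[(history_class[prev], target_class[cur])] += 1
    history_count[history_class[prev]] += 1

def p_multiclass(cur, prev):
    """P(cur | prev) ~= P(C_t(cur) | C_h(prev)) * P(cur | C_t(cur))."""
    ch, ct = history_class[prev], target_class[cur]
    p_class = class_bigram[(ch, ct)] / history_count[ch]
    p_word = word_in_class[(cur, ct)] / class_count[ct]
    return p_class * p_word

print(p_multiclass("cat", "the"))  # prints 0.6 on this toy corpus
```

The composite extension described in the abstract would additionally merge frequent word successions (e.g. treating a common word pair as a single token), which raises the effective N-gram order for those sequences without growing the parameter set elsewhere.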


Bibliographic reference. Isogai, Shuntaro / Shirai, Katsuhiko / Yamamoto, Hirofumi / Sagisaka, Yoshinori (2001): "Multi-class composite n-gram language model using multiple word clusters and word successions", in EUROSPEECH-2001, 25-28.