13th Annual Conference of the International Speech Communication Association

Portland, OR, USA
September 9-13, 2012

Towards Recurrent Neural Networks Language Models with Linguistic and Contextual Features

Yangyang Shi, Pascal Wiggers, Catholijn M. Jonker

Interactive Intelligence, Delft University of Technology, Delft, The Netherlands

Recent studies show that recurrent neural network language models (RNNLM) perform better than traditional language models such as smoothed n-grams. For traditional models it is known that adding, for example, part-of-speech information or topical information can improve performance. In this paper we investigate the usefulness of additional features for RNNLM. We look at four types of features: POS tags, lemmas, and the topic and socio-situational setting of a conversation. In our experiments, almost all RNNLM models that make use of extra information outperform our baseline RNNLM model in terms of both perplexity and word prediction accuracy. Whereas the baseline model has a perplexity of 114.79, the model that uses a combination of POS tags, socio-situational settings and lemmas achieves the lowest perplexity of 83.59, and the combination of all four types of features, using a network with 500 hidden neurons, achieves the highest word prediction accuracy of 23.11%.

Index Terms: socio-situational setting, part of speech, lemma, topic, recurrent neural networks.
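The abstract describes augmenting an RNNLM with extra feature inputs. A common way to do this, in an Elman-style recurrent network, is to concatenate a feature vector (e.g. indicators for POS tag, topic, or socio-situational setting) with the one-hot word input at each time step. The sketch below illustrates that idea only; all sizes, names, and the exact wiring are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper).
V = 10  # vocabulary size (one-hot word input)
F = 4   # auxiliary feature vector size (e.g. POS/topic/setting indicators)
H = 16  # hidden layer size

rng = np.random.default_rng(0)
# One weight matrix over [word one-hot | features | previous hidden state].
W_in = rng.normal(scale=0.1, size=(H, V + F + H))
W_out = rng.normal(scale=0.1, size=(V, H))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(word_id, feat, h_prev):
    """One forward step: word and feature vector jointly feed the hidden layer."""
    x = np.zeros(V)
    x[word_id] = 1.0
    inp = np.concatenate([x, feat, h_prev])
    h = np.tanh(W_in @ inp)
    p = softmax(W_out @ h)  # probability distribution over the next word
    return p, h

# Usage: predict the next word given word 3 and a feature indicator.
h = np.zeros(H)
feat = np.zeros(F)
feat[0] = 1.0  # e.g. hypothetical indicator of one socio-situational setting
p, h = step(3, feat, h)
```

Because the feature vector enters the hidden layer alongside the recurrent state, the network can condition its next-word distribution on linguistic and contextual information that the word history alone does not carry.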

Bibliographic reference.  Shi, Yangyang / Wiggers, Pascal / Jonker, Catholijn M. (2012): "Towards recurrent neural networks language models with linguistic and contextual features", In INTERSPEECH-2012, 1664-1667.