INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Recurrent Neural Network Based Language Model Personalization by Social Network Crowdsourcing

Tsung-Hsien Wen (1), Aaron Heidel (1), Hung-yi Lee (2), Yu Tsao (2), Lin-shan Lee (1)

(1) National Taiwan University, Taiwan
(2) Academia Sinica, Taiwan

Speech recognition has become an important feature of smartphones in recent years. Unlike traditional automatic speech recognition, speech recognition on smartphones can take advantage of personalized language models, which better capture the linguistic patterns and wording habits of the individual smartphone owner. With the growing popularity of social networks, personal texts and messages are now readily accessible; however, data sparseness remains an unsolved problem. In this paper, we propose a three-step adaptation approach to personalize recurrent neural network language models (RNNLMs). We believe that the RNNLM's ability to model word histories of arbitrary length as distributed representations can help mitigate the data sparseness problem. Furthermore, we propose additional user-oriented features that give the RNNLMs stronger personalization capabilities. Experiments on a Facebook dataset showed that the proposed method not only drastically reduced model perplexity in preliminary experiments, but also moderately reduced the word error rate in n-best rescoring tests.
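The abstract's two key ideas, a recurrent hidden state that summarizes an arbitrary-length word history, and extra user-oriented features fed into the model, can be illustrated with a minimal forward-pass sketch. This is a hypothetical simplification (all class and variable names are our own, and the paper's exact feature set and training procedure are not reproduced), not the authors' implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class FeatureAugmentedRNNLM:
    """Sketch of an RNNLM whose input and output layers are augmented
    with a per-user feature vector (a simplified illustration of the
    user-oriented features described in the abstract)."""

    def __init__(self, vocab_size, hidden_size, feat_size, seed=0):
        rng = np.random.default_rng(seed)
        self.U = rng.normal(0, 0.1, (hidden_size, vocab_size))   # word -> hidden
        self.W = rng.normal(0, 0.1, (hidden_size, hidden_size))  # recurrent weights
        self.F = rng.normal(0, 0.1, (hidden_size, feat_size))    # user features -> hidden
        self.V = rng.normal(0, 0.1, (vocab_size, hidden_size))   # hidden -> output
        self.G = rng.normal(0, 0.1, (vocab_size, feat_size))     # user features -> output

    def step(self, word_id, h_prev, user_feat):
        # The hidden state compresses the entire word history so far,
        # so no fixed n-gram context window is needed.
        h = np.tanh(self.U[:, word_id] + self.W @ h_prev + self.F @ user_feat)
        p = softmax(self.V @ h + self.G @ user_feat)  # next-word distribution
        return h, p

    def sentence_logprob(self, word_ids, user_feat):
        h = np.zeros(self.W.shape[0])
        logp = 0.0
        for prev, nxt in zip(word_ids[:-1], word_ids[1:]):
            h, p = self.step(prev, h, user_feat)
            logp += np.log(p[nxt])
        return logp
```

Because the user features enter both the hidden and output layers, the same network produces different next-word distributions for different users, which is the mechanism that allows personalization on top of a model trained on pooled (crowdsourced) data.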


Bibliographic reference. Wen, Tsung-Hsien / Heidel, Aaron / Lee, Hung-yi / Tsao, Yu / Lee, Lin-shan (2013): "Recurrent neural network based language model personalization by social network crowdsourcing", in Proc. INTERSPEECH 2013, pp. 2703-2707.