Neural Language Modeling with Implicit Cache Pointers

Ke Li, Daniel Povey, Sanjeev Khudanpur

A cache-inspired approach is proposed for neural language models (LMs) to improve long-range dependency modeling and to better predict rare words from long contexts. The approach is a simpler alternative to the attention-based pointer mechanism that enables neural LMs to reproduce words from recent history. Without using attention or a mixture structure, the method only appends extra tokens, representing words in the history, to the output layer of a neural LM and modifies the training supervision accordingly. A memory-augmentation unit is introduced to learn words that are particularly likely to repeat. We experiment with both recurrent neural network- and Transformer-based LMs. Perplexity evaluation on Penn Treebank and WikiText-2 shows that the proposed model outperforms both a plain LSTM and an LSTM with the attention-based pointer mechanism, and is more effective on rare words. N-best rescoring experiments on Switchboard indicate that it benefits both very rare and frequent words. However, it remains challenging for the proposed model, as well as for two other models with attention-based pointer mechanisms, to obtain good overall WER reductions.
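The core idea above can be illustrated with a minimal sketch (not the authors' code): the output layer is extended with extra slots, one per position in a fixed-length history window, and when the target word already appears in that window the training label points at the corresponding cache slot instead of the ordinary vocabulary entry. The window size, the choice of the most recent occurrence, and the toy vocabulary below are all assumptions for illustration.

```python
VOCAB_SIZE = 10   # toy vocabulary size (assumption for illustration)
CACHE_SIZE = 5    # number of history positions appended to the output layer

def extended_target(history, target_id):
    """Return the output index used as the training label.

    If target_id occurs among the most recent CACHE_SIZE tokens, return the
    extra slot VOCAB_SIZE + offset for its most recent occurrence (offset 0
    is the last token); otherwise fall back to the ordinary vocabulary index.
    """
    window = history[-CACHE_SIZE:]
    # Search from most recent token to oldest.
    for offset, tok in enumerate(reversed(window)):
        if tok == target_id:
            return VOCAB_SIZE + offset   # cache-pointer label
    return target_id                     # ordinary word label

history = [3, 7, 2, 7, 5]
print(extended_target(history, 7))  # 7 last occurred one step back -> slot 11
print(extended_target(history, 9))  # 9 is not in the cache -> vocab id 9
```

At inference time, probability mass assigned to a cache slot is simply added back to the word currently occupying that history position, which is why no separate attention or mixture component is needed.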

DOI: 10.21437/Interspeech.2020-3020

Cite as: Li, K., Povey, D., Khudanpur, S. (2020) Neural Language Modeling with Implicit Cache Pointers. Proc. Interspeech 2020, 3625-3629, DOI: 10.21437/Interspeech.2020-3020.

@inproceedings{li20_interspeech,
  author={Ke Li and Daniel Povey and Sanjeev Khudanpur},
  title={{Neural Language Modeling with Implicit Cache Pointers}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={3625--3629},
  doi={10.21437/Interspeech.2020-3020}
}