Stochastic Convolutional Recurrent Networks for Language Modeling

Jen-Tzung Chien, Yu-Min Huang


Sequential learning with recurrent neural networks (RNNs) has been widely developed for language modeling. An alternative is the temporal convolutional network (TCN), a variant of the one-dimensional convolutional neural network (CNN). In general, RNNs and TCNs are suited to capturing long-term and short-term features of natural sentences, respectively. This paper employs a TCN as the encoder to extract short-term dependencies and an RNN as the decoder for language modeling, where these dependencies are integrated into long-term semantics for word prediction. A new sequential learning scheme based on the convolutional recurrent network (CRN) is developed to characterize both the local dependencies and the global semantics of word sequences. Importantly, stochastic modeling of the CRN is proposed to increase model capacity in the neural language model, where the uncertainties in training sentences are represented for variational inference. The complementary benefits of CNNs and RNNs are merged in sequential learning, where a latent variable space is constructed as a generative model for sequential prediction. Experiments on language modeling demonstrate the effectiveness of the stochastic convolutional recurrent network relative to other sequential machines in terms of perplexity and word error rate.
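The architecture described above can be illustrated with a minimal sketch in PyTorch: a causal 1-D convolution plays the role of the TCN encoder for short-term dependencies, a reparameterized Gaussian latent variable supplies the stochastic component for variational inference, and a GRU decoder integrates long-term context for next-word prediction. All layer sizes and names here are hypothetical illustrations, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class StochasticConvRecurrentLM(nn.Module):
    """Illustrative stochastic convolutional recurrent language model:
    causal convolution (TCN-style encoder) -> Gaussian latent variable
    (reparameterization trick) -> GRU decoder -> next-word logits.
    Hyperparameters are placeholders, not the paper's settings."""

    def __init__(self, vocab_size, emb_dim=64, conv_dim=64,
                 latent_dim=16, hidden_dim=128, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Left-pad so position t only sees inputs <= t (causal convolution).
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(emb_dim, conv_dim, kernel_size)
        # Per-timestep Gaussian posterior over the convolutional features.
        self.to_mu = nn.Linear(conv_dim, latent_dim)
        self.to_logvar = nn.Linear(conv_dim, latent_dim)
        self.rnn = nn.GRU(conv_dim + latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, time) integer word indices
        x = self.embed(tokens).transpose(1, 2)                  # (B, E, T)
        h = torch.relu(self.conv(nn.functional.pad(x, (self.pad, 0))))
        h = h.transpose(1, 2)                                   # (B, T, C)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        y, _ = self.rnn(torch.cat([h, z], dim=-1))
        # KL term of the ELBO against a standard-normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return self.out(y), kl                                  # logits: (B, T, V)

model = StochasticConvRecurrentLM(vocab_size=1000)
logits, kl = model(torch.randint(0, 1000, (4, 20)))
print(tuple(logits.shape))  # (4, 20, 1000)
```

Training would minimize the usual evidence lower bound: cross-entropy of the logits against shifted targets plus a weighted KL term. The causal padding choice keeps the convolution autoregressive, matching the role of a TCN encoder.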


DOI: 10.21437/Interspeech.2020-1493

Cite as: Chien, J., Huang, Y. (2020) Stochastic Convolutional Recurrent Networks for Language Modeling. Proc. Interspeech 2020, 3640-3644, DOI: 10.21437/Interspeech.2020-1493.


@inproceedings{Chien2020,
  author={Jen-Tzung Chien and Yu-Min Huang},
  title={{Stochastic Convolutional Recurrent Networks for Language Modeling}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3640--3644},
  doi={10.21437/Interspeech.2020-1493},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1493}
}