arXiv:1901.09296 [cs.CL]

Variational Smoothing in Recurrent Neural Network Language Models

Lingpeng Kong, Gabor Melis, Wang Ling, Lei Yu, Dani Yogatama

Published 2019-01-27 (Version 1)

We present a new theoretical perspective on data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus, such as the unigram distribution). We use this insight to derive a more principled method to apply at prediction time and to propose natural extensions of data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate performance improvements over existing data noising methods.
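To make the connection concrete, the following is a minimal, hypothetical sketch (in NumPy, not the authors' code) of the unigram data-noising scheme of Xie et al. (2017) that the abstract reinterprets: with some probability an input token's embedding is swapped for that of a token drawn from the corpus unigram distribution, which under the paper's view corresponds to a Monte Carlo sample from a mixture-of-Gaussians variational distribution whose mixture weights come from the unigram distribution. All names and parameters here (`noised_embedding`, `gamma`, `sigma`) are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of unigram data noising viewed as sampling from a
# mixture-of-Gaussians variational distribution with unigram-derived weights.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 1000, 32
embeddings = rng.normal(size=(vocab_size, embed_dim))  # word embedding table
unigram = rng.random(vocab_size)
unigram /= unigram.sum()                               # corpus unigram distribution

def noised_embedding(token_id, gamma=0.2, sigma=0.0):
    """Return an embedding for `token_id` under unigram data noising.

    With probability (1 - gamma) keep the original embedding; with
    probability gamma substitute the embedding of a token drawn from the
    unigram distribution. Setting sigma > 0 adds Gaussian noise around the
    chosen embedding, giving the mixture-of-Gaussians reading of the scheme.
    """
    if rng.random() < gamma:
        token_id = rng.choice(vocab_size, p=unigram)   # mixture component chosen by unigram weights
    return embeddings[token_id] + sigma * rng.normal(size=embed_dim)

# Example: noise the input embeddings of a short "sentence" of token ids.
sentence = [3, 17, 256, 999]
noised_inputs = np.stack([noised_embedding(t, gamma=0.25) for t in sentence])
print(noised_inputs.shape)  # (4, 32)
```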

Related articles:
arXiv:1502.00512 [cs.CL] (Published 2015-02-02)
Scaling Recurrent Neural Network Language Models
arXiv:1806.10306 [cs.CL] (Published 2018-06-27)
Unsupervised and Efficient Vocabulary Expansion for Recurrent Neural Network Language Models in ASR
arXiv:1906.04726 [cs.CL] (Published 2019-06-11)
What Kind of Language Is Hard to Language-Model?