arXiv:1711.03953 [cs.CL]

Breaking the Softmax Bottleneck: A High-Rank RNN Language Model

Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W. Cohen

Published 2017-11-10 (Version 1)

We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68, respectively.
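
The "simple and effective method" is the paper's Mixture of Softmaxes (MoS). The bottleneck argument, in brief: with a single Softmax, the log-probabilities are (up to a per-row shift) the entries of H W^T for a context matrix H and word-embedding matrix W, so the log-probability matrix has rank at most d, the embedding dimension, while the true conditional distributions of natural language plausibly require much higher rank. Mixing K Softmaxes in probability space breaks this constraint, since the log of a weighted sum of Softmaxes is no longer linear in the context vector. Below is a minimal PyTorch sketch of the idea; the class name, layer shapes, and the default of 15 components are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixtureOfSoftmaxes(nn.Module):
        """Map one context vector to a mixture of K Softmax distributions."""

        def __init__(self, d_hidden: int, vocab_size: int, n_components: int = 15):
            super().__init__()
            self.d_hidden = d_hidden
            self.n_components = n_components
            # Mixture-weight (prior) logits: one weight per component per context.
            self.prior = nn.Linear(d_hidden, n_components)
            # Projection to K component-specific context vectors h_k.
            self.latent = nn.Linear(d_hidden, n_components * d_hidden)
            # Shared output (word-embedding) matrix across all components.
            self.decoder = nn.Linear(d_hidden, vocab_size)

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            # h: (batch, d_hidden) RNN context; returns (batch, vocab_size) probabilities.
            pi = F.softmax(self.prior(h), dim=-1)                      # (batch, K)
            hk = torch.tanh(self.latent(h))                            # (batch, K*d)
            hk = hk.view(-1, self.n_components, self.d_hidden)         # (batch, K, d)
            comp = F.softmax(self.decoder(hk), dim=-1)                 # (batch, K, vocab)
            # Mix in probability space, not logit space: this nonlinearity is
            # what lifts the rank of the resulting log-probability matrix.
            return (pi.unsqueeze(-1) * comp).sum(dim=1)                # (batch, vocab)

Usage is a drop-in replacement for the final projection-plus-Softmax of an RNN language model, e.g. p = MixtureOfSoftmaxes(400, 10000)(torch.randn(32, 400)), where each row of p sums to 1. Note that averaging the component logits instead of the probabilities would collapse back to a single low-rank Softmax.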

Related articles:
arXiv:1708.00781 [cs.CL] (Published 2017-08-02)
Dynamic Entity Representations in Neural Language Models
arXiv:1606.00499 [cs.CL] (Published 2016-06-01)
Generalizing and Hybridizing Count-based and Neural Language Models
arXiv:1707.05589 [cs.CL] (Published 2017-07-18)
On the State of the Art of Evaluation in Neural Language Models