arXiv Analytics

arXiv:1602.07868 [cs.LG]

Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Tim Salimans, Diederik P. Kingma

Published 2016-02-25 (Version 1)

We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.
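The reparameterization described in the abstract writes each weight vector w as w = g · v / ‖v‖, so the scalar g carries the length and v / ‖v‖ the direction, and both are learned by gradient descent. Below is a minimal sketch of this idea in PyTorch, assuming a fully connected layer; the class name, initialization scale, and layer shape are illustrative choices, not the authors' released code.

```python
import torch

class WeightNormLinear(torch.nn.Module):
    """Sketch of a linear layer with weight normalization: w = g * v / ||v||."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # v holds the direction parameters, one row per output unit.
        self.v = torch.nn.Parameter(0.05 * torch.randn(out_features, in_features))
        # g holds the per-unit scale (the decoupled "length" of each weight vector).
        self.g = torch.nn.Parameter(torch.ones(out_features))
        self.b = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Normalize each row of v to unit length, then rescale it to length g.
        # Gradients flow to both g and v, which is what improves conditioning.
        w = self.g.unsqueeze(1) * self.v / self.v.norm(dim=1, keepdim=True)
        return torch.nn.functional.linear(x, w, self.b)
```

Note that, unlike batch normalization, nothing here depends on minibatch statistics, which is why the method carries over to recurrent models and reinforcement learning. Frameworks also ship built-in versions of this decomposition (e.g. PyTorch's torch.nn.utils.weight_norm) that wrap an existing layer rather than defining a new one.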

Related articles
arXiv:1710.02338 [cs.LG] (Published 2017-10-06)
Projection Based Weight Normalization for Deep Neural Networks
arXiv:1711.02114 [cs.LG] (Published 2017-11-06)
Bounding and Counting Linear Regions of Deep Neural Networks
arXiv:1605.05359 [cs.LG] (Published 2016-05-17)
Hierarchical Reinforcement Learning using Spatio-Temporal Abstractions and Deep Neural Networks