arXiv:2006.08643 [stat.ML]

On the training dynamics of deep networks with $L_2$ regularization

Aitor Lewkowycz, Guy Gur-Ari

Published 2020-06-15 (version 1)

We study the role of $L_2$ regularization in deep learning, and uncover simple relations between the performance of the model, the $L_2$ coefficient, the learning rate, and the number of training steps. These empirical relations hold when the network is overparameterized. They can be used to predict the optimal regularization parameter of a given model. In addition, based on these observations we propose a dynamical schedule for the regularization parameter that improves performance and speeds up training. We test these proposals in modern image classification settings. Finally, we show that these empirical relations can be understood theoretically in the context of infinitely wide networks. We derive the gradient flow dynamics of such networks, and compare the role of $L_2$ regularization in this context with that of linear models.
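
As a rough illustration of the objective the abstract refers to (not the authors' code), the sketch below runs full-batch gradient descent on an overparameterized linear model with an explicit $L_2$ penalty, i.e. loss(w) = ||Xw - y||^2 / (2n) + (lambda/2)||w||^2. All names, data, and hyperparameters are illustrative assumptions; rerunning it with different values of lambda, lr, and steps is one way to probe the kind of relation between the $L_2$ coefficient, the learning rate, and the number of training steps that the paper studies.

    import numpy as np

    # Toy sketch (not the authors' code): gradient descent on an
    # overparameterized linear model with an explicit L2 penalty,
    #     loss(w) = ||Xw - y||^2 / (2n) + (lam / 2) * ||w||^2,
    # so each update is  w <- w - lr * (X^T (Xw - y) / n + lam * w).

    rng = np.random.default_rng(0)
    n, d = 50, 200                      # fewer samples than parameters
    w_true = rng.normal(size=d) / np.sqrt(d)
    X = rng.normal(size=(n, d))
    X_test = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    y_test = X_test @ w_true + 0.1 * rng.normal(size=n)

    def train(lam, lr=0.1, steps=20_000):
        """Full-batch gradient descent with an L2 penalty of strength lam."""
        w = np.zeros(d)
        for _ in range(steps):
            w -= lr * (X.T @ (X @ w - y) / n + lam * w)
        return w

    for lam in (1e-3, 1e-2, 1e-1):
        w = train(lam)
        test_mse = np.mean((X_test @ w - y_test) ** 2)
        print(f"lam={lam:.0e}  test MSE={test_mse:.4f}  ||w||={np.linalg.norm(w):.3f}")

In this toy setting, larger lambda shrinks ||w|| and shifts how many steps are needed before test error stops improving; the paper's dynamical schedule instead adjusts the regularization coefficient during training rather than fixing it in advance.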

Related articles:
arXiv:1901.09021 [stat.ML] (Published 2019-01-25)
Complexity of Linear Regions in Deep Networks
arXiv:1402.5836 [stat.ML] (Published 2014-02-24, updated 2014-09-14)
Avoiding pathologies in very deep networks
arXiv:2002.08253 [stat.ML] (Published 2020-02-19)
Distance-Based Regularisation of Deep Networks for Fine-Tuning