arXiv Analytics

arXiv:2007.10099 [cs.LG]

Early Stopping in Deep Networks: Double Descent and How to Eliminate it

Reinhard Heckel, Fatih Furkan Yilmaz

Published 2020-07-20 (Version 1)

Over-parameterized models, in particular deep networks, often exhibit a double descent phenomenon, where, as a function of model size, the error first decreases, then increases, and finally decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because training epochs control the model complexity. In this paper, we show that such epoch-wise double descent arises for a different reason: it is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different times, and eliminating this superposition through proper scaling of stepsizes can significantly improve early stopping performance. We show this analytically for i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and for ii) a two-layer neural network, where the first and second layers each govern a bias-variance tradeoff. Inspired by this theory, we study a five-layer convolutional network empirically and show that eliminating epoch-wise double descent by adjusting the stepsizes of different layers significantly improves the early stopping performance.
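The layer-wise stepsize adjustment described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example assuming a PyTorch-style training setup; the model, the split into parameter groups, the input size, and the scaling factors are illustrative placeholders, not the authors' exact architecture or hyperparameters.

```python
# Minimal sketch (assumption: PyTorch-style training; the model, layer split,
# and scaling factors are illustrative, not the paper's exact configuration).
# Idea: give different parts of the network different stepsizes so that their
# individual bias-variance tradeoffs play out on a similar timescale,
# removing the epoch-wise double descent bump.
import torch
import torch.nn as nn

# Toy network; assumes 3x32x32 inputs (e.g., CIFAR-10-sized images).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 10),
)

base_lr = 0.01
# Hypothetical per-layer scaling: layers that would otherwise be learned
# more slowly receive a larger stepsize.
param_groups = [
    {"params": model[0].parameters(), "lr": base_lr * 10.0},  # first conv layer
    {"params": model[2].parameters(), "lr": base_lr * 3.0},   # second conv layer
    {"params": model[5].parameters(), "lr": base_lr},          # final linear layer
]
optimizer = torch.optim.SGD(param_groups, momentum=0.9)
```

The design choice here is simply that each parameter group gets its own learning rate, so layers that would otherwise be fit at very different times reach their individual bias-variance tradeoff peaks at roughly the same epoch, which is the mechanism the paper exploits to eliminate epoch-wise double descent.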

Related articles:
arXiv:1912.08286 [cs.LG] (Published 2019-12-17)
On the Bias-Variance Tradeoff: Textbooks Need an Update
arXiv:1908.09375 [cs.LG] (Published 2019-08-25)
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
arXiv:2002.04710 [cs.LG] (Published 2020-02-11)
Unique Properties of Wide Minima in Deep Networks