arXiv Analytics

arXiv:1709.01427 [stat.ML]

Stochastic Gradient Descent: Going As Fast As Possible But Not Faster

Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag

Published 2017-09-05 (Version 1)

When applied to training deep neural networks, stochastic gradient descent (SGD) often goes through steady progression phases, interrupted by catastrophic episodes in which the loss and the gradient norm explode. A possible mitigation of such events is to slow down the learning process. This paper presents a novel approach to controlling the SGD learning rate, based on two statistical tests. The first test, aimed at fast learning, compares the momentum of the normalized gradient vectors to that of random unit vectors and accordingly gracefully increases or decreases the learning rate. The second test is a change point detection test, aimed at detecting catastrophic learning episodes; when it triggers, the learning rate is instantly halved. The combined ability to speed up and slow down the learning rate allows the proposed approach, called SALeRA, to learn as fast as possible but not faster. Experiments on standard benchmarks show that SALeRA performs well in practice and compares favorably to the state of the art.
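
To make the two-test mechanism concrete, here is a minimal, illustrative sketch of a learning-rate controller in this spirit. It is not the authors' implementation: the class name SimpleSALeRAController, the exponential-moving-average agreement statistic, the isotropic random baseline, the z-score catastrophe check (a simplified stand-in for the paper's change point test), and all constants are assumptions introduced only for illustration.

```python
import numpy as np

class SimpleSALeRAController:
    """Toy learning-rate controller sketching the two-test idea above (not the paper's code)."""

    def __init__(self, lr=0.01, beta=0.9, adapt_factor=1.02):
        self.lr = lr
        self.beta = beta                  # momentum factor for the agreement vector
        self.adapt_factor = adapt_factor  # multiplicative learning-rate adjustment step
        self.agreement = None             # EMA of normalized gradient vectors
        self.loss_mean = None             # running loss mean for the catastrophe check
        self.loss_var = 0.0               # running loss variance

    def update(self, grad, loss):
        grad = np.asarray(grad, dtype=float).ravel()
        unit = grad / (np.linalg.norm(grad) + 1e-12)

        # Test 1: do successive gradient directions agree more than random unit vectors?
        if self.agreement is None:
            self.agreement = np.zeros_like(unit)
        self.agreement = self.beta * self.agreement + (1.0 - self.beta) * unit
        # Stationary norm of the same EMA applied to i.i.d. random unit vectors
        # (isotropic approximation; a stand-in for the paper's exact statistic).
        random_baseline = np.sqrt((1.0 - self.beta) / (1.0 + self.beta))
        if np.linalg.norm(self.agreement) > random_baseline:
            self.lr *= self.adapt_factor   # gradients agree: gracefully speed up
        else:
            self.lr /= self.adapt_factor   # gradients look random: gracefully slow down

        # Test 2: crude catastrophe check. The paper uses a change point detection
        # test; a simple z-score on the loss is used here as an assumption.
        if self.loss_mean is None:
            self.loss_mean = float(loss)
        else:
            std = np.sqrt(self.loss_var) + 1e-12
            if loss > self.loss_mean + 3.0 * std:
                self.lr *= 0.5             # suspected explosion: halve the learning rate
            delta = float(loss) - self.loss_mean
            self.loss_mean += 0.1 * delta
            self.loss_var = 0.9 * self.loss_var + 0.1 * delta * delta
        return self.lr
```

A hypothetical training loop would call controller.update(grad, loss) once per mini-batch and use the returned value as the step size for the next parameter update.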

Related articles:
arXiv:1908.07607 [stat.ML] (Published 2019-08-20)
Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent
arXiv:2108.09507 [stat.ML] (Published 2021-08-21)
How Can Increased Randomness in Stochastic Gradient Descent Improve Generalization?
arXiv:2105.01650 [stat.ML] (Published 2021-05-04)
Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis