arXiv:1710.06382 [stat.ML]

Convergence diagnostics for stochastic gradient descent with constant step size

Jerry Chee, Panos Toulis

Published 2017-10-17 (Version 1)

Iterative procedures in stochastic optimization are typically composed of a transient phase and a stationary phase. During the transient phase the procedure converges towards a region of interest, and during the stationary phase the procedure oscillates in a convergence region, commonly around a single point. In this paper, we develop a statistical diagnostic test to detect such a phase transition in the context of stochastic gradient descent with constant step size. We present theoretical and experimental results suggesting that the diagnostic behaves as intended, and that the region where the diagnostic is activated coincides with the convergence region. For a class of loss functions, we derive a closed-form solution describing this region, and support the theoretical result with simulated experiments. Finally, we suggest an application to speed up convergence of stochastic gradient descent by halving the learning rate each time convergence is detected. This leads to remarkable speed gains that are empirically comparable to state-of-the-art procedures.
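
The halving scheme described at the end of the abstract lends itself to a short sketch. One diagnostic of the kind described (in the spirit of Pflug's classic procedure) accumulates inner products of successive stochastic gradients and declares convergence once the running sum turns negative, since successive gradients tend to align during the transient phase and anti-correlate once the iterates oscillate around the optimum. The Python sketch below wires such a test into constant step size SGD on a toy least-squares problem and halves the learning rate at each detection. The objective, burn-in length, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch: constant step size SGD with a Pflug-type convergence
# diagnostic. The statistic S accumulates inner products of successive
# stochastic gradients; when it drifts negative we treat that as a
# detected phase transition, halve the learning rate, and reset S.
# Toy least-squares setup and parameter values are illustrative only.

rng = np.random.default_rng(0)
n, p = 10_000, 10
X = rng.normal(size=(n, p))
theta_star = rng.normal(size=p)
y = X @ theta_star + rng.normal(scale=0.5, size=n)

def stoch_grad(theta, i):
    """Stochastic gradient of 0.5 * (x_i' theta - y_i)^2."""
    return (X[i] @ theta - y[i]) * X[i]

theta = np.zeros(p)
lr = 0.01          # constant step size within each phase
burn_in = 100      # skip the first iterations of each phase (assumed value)
S, count = 0.0, 0
prev_grad = None

for t in range(50_000):
    g = stoch_grad(theta, rng.integers(n))
    theta -= lr * g

    if prev_grad is not None:
        count += 1
        if count > burn_in:
            S += prev_grad @ g      # running sum of successive inner products
            if S < 0:               # convergence detected in this phase
                lr *= 0.5           # halve the learning rate
                S, count = 0.0, 0   # restart the diagnostic
    prev_grad = g

print("final parameter error:", np.linalg.norm(theta - theta_star))
```

In this sketch the diagnostic restarts from zero after every halving, so each phase is monitored independently; how the statistic is initialized, burned in, and thresholded in practice is exactly the kind of detail the paper's theoretical and experimental analysis addresses.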

Comments: 38 pages, 3 figures, 2 algorithms, 1 table
Related articles:
arXiv:2409.07434 [stat.ML] (Published 2024-09-11)
Asymptotics of Stochastic Gradient Descent with Dropout Regularization in Linear Models
arXiv:1911.01483 [stat.ML] (Published 2019-11-04)
Statistical Inference for Model Parameters in Stochastic Gradient Descent via Batch Means
arXiv:2006.10840 [stat.ML] (Published 2020-06-18)
Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping