arXiv Analytics

arXiv:2006.10840 [stat.ML]

Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping

Nicole Mücke, Enrico Reiss

Published 2020-06-18 (Version 1)

Stochastic Gradient Descent (SGD) has become the method of choice for solving a broad range of machine learning problems. However, some of its learning properties are still not fully understood. We consider least squares learning in reproducing kernel Hilbert spaces (RKHSs) and extend the classical SGD analysis to a learning setting in Hilbert scales, including Sobolev spaces and diffusion spaces on compact Riemannian manifolds. We show that, even for well-specified models, violating a traditional benchmark smoothness assumption has a tremendous effect on the learning rate. In addition, we show that for misspecified models, preconditioning in an appropriate Hilbert scale helps to reduce the number of iterations, i.e., allowing for "earlier stopping".
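To make the setting concrete, the sketch below shows plain stochastic gradient descent for kernel least squares on synthetic 1-D data, with a spectral power of the Gram matrix standing in for preconditioning. This is an illustrative assumption on my part, not the paper's construction: the Gaussian kernel, the `precond_power` knob, and the decaying step size are all hypothetical choices, and the Hilbert-scale operator of the paper is only mimicked schematically; setting `precond_power=0` recovers unpreconditioned kernel SGD.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.5):
    # Gaussian (RBF) kernel matrix for 1-D inputs (illustrative choice).
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kernel_sgd(X, y, n_passes=5, step=0.5, precond_power=0.0, bandwidth=0.5, seed=0):
    # Sketch of SGD for kernel least squares with a spectral preconditioner
    # K^precond_power acting on the stochastic gradient direction.
    rng = np.random.default_rng(seed)
    n = len(X)
    K = gaussian_kernel(X, X, bandwidth)                     # Gram matrix
    evals, evecs = np.linalg.eigh(K + 1e-10 * np.eye(n))
    P = evecs @ np.diag(np.clip(evals, 1e-10, None) ** precond_power) @ evecs.T
    alpha = np.zeros(n)                                      # expansion coefficients of the iterate
    for t in range(n_passes * n):
        i = rng.integers(n)
        residual = K[i] @ alpha - y[i]                       # pointwise least-squares residual
        alpha -= (step / np.sqrt(t + 1)) * residual * P[i]   # preconditioned stochastic step
    return alpha

# Usage: regress a smooth target from noisy samples.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-1, 1, 80))
y = np.sin(3 * X) + 0.1 * rng.standard_normal(80)
alpha = kernel_sgd(X, y, precond_power=0.25)
y_hat = gaussian_kernel(X, X) @ alpha
print("training MSE:", np.mean((y_hat - y) ** 2))
```

In this toy version the preconditioner simply rescales the gradient direction along the kernel's eigenbasis; the point it illustrates is that changing the geometry in which the SGD step is taken can change how many iterations are needed before stopping.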

Related articles:
arXiv:2409.07434 [stat.ML] (Published 2024-09-11)
Asymptotics of Stochastic Gradient Descent with Dropout Regularization in Linear Models
arXiv:1911.01483 [stat.ML] (Published 2019-11-04)
Statistical Inference for Model Parameters in Stochastic Gradient Descent via Batch Means
arXiv:1710.06382 [stat.ML] (Published 2017-10-17)
Convergence diagnostics for stochastic gradient descent with constant step size