arXiv Analytics

arXiv:2110.06910 [stat.ML]

On the Double Descent of Random Features Models Trained with SGD

Fanghui Liu, Johan A. K. Suykens, Volkan Cevher

Published 2021-10-13, updated 2022-05-28 (Version 5)

We study the generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD). In this regime, we derive precise non-asymptotic error bounds for RF regression under both constant and polynomial-decay step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple sources of randomness, namely initialization, label noise, and data sampling (as well as stochastic gradients), when no closed-form solution is available, and it also goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well for interpolation learning, and characterize the double descent behavior by the unimodality of the variance and the monotonic decrease of the bias. In addition, we prove that the constant step-size SGD setting incurs no loss in convergence rate compared with the exact minimum-norm interpolator, providing a theoretical justification for using SGD in practice.
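As a rough illustration of the setting described above, the sketch below (not the authors' code; the data model, ReLU features, step size, and feature counts are assumptions chosen for illustration) trains RF regression with constant step-size SGD and sweeps the number of random features across the interpolation threshold m ≈ n_train, where a double-descent-shaped test error curve can emerge.

# Minimal illustrative sketch: RF regression trained by constant step-size SGD,
# sweeping the number of random features m to probe the test error curve.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, noise = 20, 200, 2000, 0.1

# Ground-truth linear target; training labels carry additive noise.
w_star = rng.normal(size=d) / np.sqrt(d)
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_star + noise * rng.normal(size=n_train)
X_te = rng.normal(size=(n_test, d))
y_te = X_te @ w_star

def rf_map(X, W):
    # Random ReLU features: phi(x) = max(W x, 0) / sqrt(m), with W frozen.
    return np.maximum(X @ W.T, 0.0) / np.sqrt(W.shape[0])

def sgd_rf(X, y, m, lr=0.05, epochs=100):
    # Fit the RF output weights by single-sample SGD with a constant step size.
    W = rng.normal(size=(m, X.shape[1]))   # random first-layer weights (fixed)
    theta = np.zeros(m)                    # trainable second-layer weights
    Phi = rf_map(X, W)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            theta -= lr * (Phi[i] @ theta - y[i]) * Phi[i]
    return W, theta

# Sweep m below, near, and above n_train and record the test error.
for m in [20, 50, 100, 200, 400, 1000]:
    W, theta = sgd_rf(X_tr, y_tr, m)
    mse = np.mean((rf_map(X_te, W) @ theta - y_te) ** 2)
    print(f"m = {m:5d}   test MSE = {mse:.4f}")

The step size is kept small enough that lr times the squared feature norm stays well below the stability threshold of SGD for least squares; the exact peak location and height of the error curve depend on these illustrative choices, not on the paper's precise assumptions.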

Comments: 41 pages, 4 figures. This version provides more discussion of fair assumptions and the tightness of our results
Categories: stat.ML, cs.LG
Related articles:
arXiv:2205.15549 [stat.ML] (Published 2022-05-31)
VC Theoretical Explanation of Double Descent
arXiv:1911.05822 [stat.ML] (Published 2019-11-13)
A Model of Double Descent for High-dimensional Binary Linear Classification
arXiv:2010.02681 [stat.ML] (Published 2020-10-06)
Kernel regression in high dimension: Refined analysis beyond double descent