arXiv Analytics

arXiv:1705.09280 [stat.ML]

Implicit Regularization in Matrix Factorization

Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro

Published 2017-05-25 (Version 1)

We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix $X$ with gradient descent on a factorization of $X$. We conjecture, and provide empirical and theoretical evidence, that with small enough step sizes and initialization close enough to the origin, gradient descent on a full-dimensional factorization converges to the minimum nuclear norm solution.
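The setting in the abstract can be illustrated with a minimal sketch (not code from the paper; all dimensions, step sizes, and iteration counts are illustrative choices): an underdetermined quadratic objective $f(X) = \|\mathcal{A}(X) - y\|^2$ over symmetric matrices, minimized by gradient descent on a full-dimensional factorization $X = UU^\top$ with small step size and near-zero initialization, after which we inspect the nuclear norm of the solution reached.

```python
import numpy as np

# Illustrative sketch of the paper's setting (dimensions and constants are
# arbitrary choices for demonstration, not values from the paper).
rng = np.random.default_rng(0)
n, m = 4, 8  # X is n x n symmetric (10 degrees of freedom), m = 8 < 10 measurements

# Planted rank-1 solution and random symmetric measurement matrices A_k.
u = rng.standard_normal((n, 1))
X_star = u @ u.T
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', A, X_star)

def loss(X):
    """Underdetermined quadratic objective f(X) = sum_k (<A_k, X> - y_k)^2."""
    r = np.einsum('kij,ij->k', A, X) - y
    return float(r @ r)

# Gradient descent on the full-dimensional factorization X = U U^T,
# with small step size and initialization close to the origin.
U = 1e-3 * rng.standard_normal((n, n))
eta = 1e-3
loss_init = loss(U @ U.T)
for _ in range(150_000):
    X = U @ U.T
    r = np.einsum('kij,ij->k', A, X) - y
    grad_X = 2 * np.einsum('k,kij->ij', r, A)  # df/dX
    U -= eta * (grad_X + grad_X.T) @ U         # chain rule through X = U U^T

X = U @ U.T
nuclear_norm = np.linalg.svd(X, compute_uv=False).sum()
print(f"loss: {loss_init:.3e} -> {loss(X):.3e}")
print(f"nuclear norm of reached solution: {nuclear_norm:.3f}")
print(f"nuclear norm of planted rank-1 X*: "
      f"{np.linalg.svd(X_star, compute_uv=False).sum():.3f}")
```

Under the conjecture, the nuclear norm of the solution gradient descent reaches should be close to the minimum over all matrices consistent with the measurements; with generic measurements and a planted low-rank matrix, that minimizer is typically the planted matrix itself.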

Related articles: Most relevant | Search more
arXiv:1806.00811 [stat.ML] (Published 2018-06-03)
Causal Inference with Noisy and Missing Covariates via Matrix Factorization
arXiv:2101.04968 [stat.ML] (Published 2021-01-13)
Learning with Gradient Descent and Weakly Convex Losses
arXiv:1710.10345 [stat.ML] (Published 2017-10-27)
The Implicit Bias of Gradient Descent on Separable Data