arXiv Analytics


arXiv:1805.09545 [math.OC]

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport

Lenaic Chizat, Francis Bach

Published 2018-05-24 (Version 1)

Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.
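The particle discretization described in the abstract — representing the unknown measure by a finite mixture of particles and running gradient descent jointly on their weights and positions — can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' implementation: the synthetic regression data, ReLU features, squared loss, particle count, and step size are all assumptions chosen for concreteness.

```python
# Minimal sketch (assumed setup: synthetic data, ReLU features, squared loss,
# fixed step size) of the particle discretization from the abstract: the
# measure is represented by m particles, each with a weight w_j and a
# position theta_j, and gradient descent updates both jointly.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (hypothetical target, for illustration only).
d, n = 5, 200
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Particle initialization: signed weights w and positions theta on the sphere.
m = 500                                    # "many-particle" regime
theta = rng.standard_normal((m, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)
w = rng.choice([-1.0, 1.0], size=m)

def predict(w, theta, X):
    """Single-hidden-layer ReLU network: f(x) = (1/m) * sum_j w_j * relu(<theta_j, x>)."""
    return (np.maximum(X @ theta.T, 0.0) @ w) / m

lr = 0.5
for step in range(2000):
    pre = X @ theta.T                      # (n, m) pre-activations
    act = np.maximum(pre, 0.0)             # ReLU features
    residual = (act @ w) / m - y           # prediction error, shape (n,)

    # Gradients of the mean squared error w.r.t. weights and positions.
    grad_w = act.T @ residual / (n * m)                                  # (m,)
    grad_theta = ((residual[:, None] * (pre > 0)) * w).T @ X / (n * m)   # (m, d)

    # Joint gradient descent on weights and positions (a time-discretized
    # surrogate for the continuous-time gradient flow studied in the paper).
    w -= lr * m * grad_w          # the factor m mimics the mean-field scaling
    theta -= lr * m * grad_theta

final_loss = 0.5 * np.mean((predict(w, theta, X) - y) ** 2)
print("final loss:", final_loss)
```

In this toy run the loss decreases steadily despite the non-convexity in (w, theta), which is the finite-particle analogue of the global convergence phenomenon the paper analyzes in the many-particle limit.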

Related articles:
arXiv:2304.09537 [math.OC] (Published 2023-04-19)
Global Convergence of Algorithms Based on Unions of Nonexpansive Maps
arXiv:2202.02914 [math.OC] (Published 2022-02-07)
Global convergence and optimality of the heavy ball method for non-convex optimization
arXiv:1910.09496 [math.OC] (Published 2019-10-21)
Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence