arXiv:1905.12721 [cs.LG]
Matrix-Free Preconditioning in Online Learning
Published 2019-05-29Version 1
We provide an online convex optimization algorithm whose regret interpolates between the regret of an algorithm using an optimal preconditioning matrix and that of one using a diagonal preconditioning matrix. Our regret bound is never worse than that obtained by diagonal preconditioning, and in certain settings it even surpasses that of algorithms with full-matrix preconditioning. Importantly, our algorithm runs in the same time and space complexity as online gradient descent. Along the way we incorporate new techniques that mildly streamline prior regret analyses and improve their logarithmic factors. We conclude by benchmarking our algorithm on synthetic data and deep learning tasks.
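To make the contrast concrete, here is a minimal sketch of diagonal preconditioning in online gradient descent, in the style of AdaGrad. This is an illustration of the baseline the abstract compares against, not the paper's matrix-free algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def diagonal_preconditioned_ogd(grads, lr=1.0, eps=1e-8):
    """Online gradient descent with a diagonal preconditioner (AdaGrad-style).

    Each coordinate's step is scaled by the inverse square root of its
    accumulated squared gradients. This costs O(d) time and space per
    round, versus O(d^2) for a full-matrix preconditioner.
    """
    d = len(grads[0])
    x = np.zeros(d)
    g_sq = np.zeros(d)  # per-coordinate running sum of squared gradients
    iterates = []
    for g in grads:
        g = np.asarray(g, dtype=float)
        g_sq += g * g
        x = x - lr * g / (np.sqrt(g_sq) + eps)
        iterates.append(x.copy())
    return iterates
```

A full-matrix preconditioner would instead accumulate the outer products of the gradients and multiply by the inverse root of that matrix, which adapts to correlated coordinates but is far more expensive; the paper's contribution is capturing much of that benefit at the diagonal method's cost.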
Comments: ICML 2019
Related articles:
arXiv:1508.00842 [cs.LG] (Published 2015-08-04)
Perceptron like Algorithms for Online Learning to Rank
arXiv:1810.01920 [cs.LG] (Published 2018-10-03)
Generalized Inverse Optimization through Online Learning
arXiv:1902.07286 [cs.LG] (Published 2019-02-19)
Online Learning with Continuous Variations: Dynamic Regret and Reductions