arXiv:1902.07286 [cs.LG]

Online Learning with Continuous Variations: Dynamic Regret and Reductions

Ching-An Cheng, Jonathan Lee, Ken Goldberg, Byron Boots

Published 2019-02-19 (Version 1)

We study the dynamic regret of a new class of online learning problems, in which the gradient of the loss function changes continuously across rounds with respect to the learner's decisions. This setup is motivated by the use of online learning as a tool to analyze the performance of iterative algorithms. Our goal is to identify interpretable dynamic regret rates that explicitly consider the loss variations as consequences of the learner's decisions rather than as external constraints. We show that achieving sublinear dynamic regret in general is equivalent to solving certain variational inequalities, equilibrium problems, and fixed-point problems. Leveraging this identification, we present necessary and sufficient conditions for the existence of efficient algorithms that achieve sublinear dynamic regret. Furthermore, we show a reduction from dynamic regret to both static regret and the convergence rate to equilibria of the aforementioned problems, which allows us to analyze the dynamic regret of many existing learning algorithms in a few steps.
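To make the quantity under study concrete, here is a minimal compilable sketch of the conventional definition of dynamic regret; the symbols N, x_n, \ell_n, \mathcal{X}, and f below are illustrative assumptions, not notation taken from the abstract itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A sketch under assumed notation: X is the decision set and x_n the
% learner's decision in round n, with per-round loss \ell_n.
Dynamic regret compares the learner against the \emph{per-round}
minimizers rather than a single fixed comparator:
\begin{equation*}
  \mathrm{Regret}^{d}_{N} \;:=\; \sum_{n=1}^{N} \ell_n(x_n)
  \;-\; \sum_{n=1}^{N} \min_{x \in \mathcal{X}} \ell_n(x).
\end{equation*}
% In the continuous-variation setting described in the abstract, the
% round-n loss is assumed here to depend on the learner's own decision,
% e.g. \ell_n(\cdot) = f(x_n, \cdot) for some bivariate function f.
In this setting the comparator sequence moves as a consequence of the
learner's play, which is why sublinear dynamic regret connects to
solving a fixed-point-type problem rather than to bounding an
externally imposed path length.
\end{document}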

Related articles:
arXiv:1905.12721 [cs.LG] (Published 2019-05-29)
Matrix-Free Preconditioning in Online Learning
arXiv:1810.01920 [cs.LG] (Published 2018-10-03)
Generalized Inverse Optimization through Online Learning
arXiv:1711.03343 [cs.LG] (Published 2017-11-09)
Analysis of Dropout in Online Learning