arXiv:2007.03479 [cs.LG]

Dynamic Regret of Convex and Smooth Functions

Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

Published 2020-07-07 (Version 1)

We investigate online convex optimization in non-stationary environments and choose dynamic regret as the performance measure, defined as the difference between the cumulative loss incurred by the online algorithm and that of any feasible comparator sequence. Let $T$ be the time horizon and $P_T$ the path-length, which essentially reflects the non-stationarity of the environments; the state-of-the-art dynamic regret bound is $\mathcal{O}(\sqrt{T(1+P_T)})$. Although this bound is proved to be minimax optimal for convex functions, in this paper we demonstrate that the dynamic regret can be further improved by exploiting the smoothness condition. Specifically, we propose novel online algorithms that leverage smoothness to replace the dependence on $T$ in the dynamic regret with problem-dependent quantities: the variation in the gradients of the loss functions, and the cumulative loss of the comparator sequence. These quantities are at most $\mathcal{O}(T)$ yet can be much smaller in benign environments. Our results are therefore adaptive to the intrinsic difficulty of the problem: the bounds are tighter than existing results on easy problems, while guaranteeing the same rate in the worst case.
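To make the performance measure concrete, here is a minimal sketch (not the paper's algorithm) that runs plain online gradient descent on a stream of smooth quadratic losses with a drifting minimizer, then computes the dynamic regret against the comparator sequence $u_t$ and its path-length $P_T = \sum_t \|u_{t-1} - u_t\|$. The loss family, step size, and drift model are illustrative assumptions.

```python
import numpy as np

# Illustrative setup: smooth losses f_t(x) = 0.5 * (x - c_t)^2 whose
# minimizers c_t drift slowly over time (a non-stationary environment).
rng = np.random.default_rng(0)
T = 1000
c = np.cumsum(0.01 * rng.standard_normal(T))  # drifting per-round minimizers

x = 0.0        # learner's iterate
eta = 0.1      # step size (illustrative choice, not tuned as in the paper)
learner_loss = 0.0
comparator_loss = 0.0
for t in range(T):
    learner_loss += 0.5 * (x - c[t]) ** 2  # loss suffered by the learner
    comparator_loss += 0.0                 # comparator u_t = c_t attains loss 0
    grad = x - c[t]                        # gradient of f_t at x
    x -= eta * grad                        # online gradient descent step

# Dynamic regret: cumulative learner loss minus that of the comparator sequence.
dynamic_regret = learner_loss - comparator_loss
# Path-length P_T measures how much the comparator sequence moves.
path_length = np.abs(np.diff(c)).sum()
print(f"dynamic regret: {dynamic_regret:.3f}, path-length P_T: {path_length:.3f}")
```

Note that the comparator here incurs zero cumulative loss, which is exactly the benign regime the abstract highlights: a bound depending on the comparator's cumulative loss, rather than on $T$, would then be much tighter than $\mathcal{O}(\sqrt{T(1+P_T)})$.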

Related articles:
arXiv:2102.09305 [cs.LG] (Published 2021-02-18)
Boosting for Online Convex Optimization
arXiv:2009.14436 [cs.LG] (Published 2020-09-30)
Online Convex Optimization in Changing Environments and its Application to Resource Allocation
arXiv:2402.08621 [cs.LG] (Published 2024-02-13, updated 2024-05-13)
A Generalized Approach to Online Convex Optimization