arXiv:1608.03933 [cs.LG]

Improved Dynamic Regret for Non-degenerate Functions

Lijun Zhang, Tianbao Yang, Jinfeng Yi, Rong Jin, Zhi-Hua Zhou

Published 2016-08-13 (Version 1)

Recently, there has been growing research interest in the analysis of dynamic regret, which measures the performance of an online learner against a sequence of local minimizers. By exploiting strong convexity, previous studies have shown that the dynamic regret can be upper bounded by the path-length of the comparator sequence. In this paper, we show that the dynamic regret can be further improved by allowing the learner to query the gradient of the function multiple times in each round, and that strong convexity can be weakened to other non-degeneracy conditions. Specifically, we introduce the squared path-length, which can be much smaller than the path-length, as a new regularity of the comparator sequence. When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. We then extend our theoretical guarantees to functions that are semi-strongly convex or self-concordant. To the best of our knowledge, this is the first time that semi-strong convexity and self-concordance have been utilized to tighten the dynamic regret.
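For concreteness, the quantities in the abstract can be written out explicitly. The following is a sketch using standard notation from the dynamic-regret literature; the symbols $\mathbf{x}_t^*$, $\mathcal{P}_T^*$, and $\mathcal{S}_T^*$ follow common convention and may differ from the paper's exact notation.

```latex
% Dynamic regret: cumulative loss measured against the per-round minimizers
% x_t^* = argmin_{x in X} f_t(x), rather than a single fixed comparator.
\mathrm{Regret}_d(T) = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(\mathbf{x}_t^*)

% Path-length of the comparator sequence (the classical regularity).
\mathcal{P}_T^* = \sum_{t=2}^{T} \left\| \mathbf{x}_t^* - \mathbf{x}_{t-1}^* \right\|

% Squared path-length (the new regularity). It can be much smaller: if
% successive minimizers move by O(1/\sqrt{T}) per round, then
% \mathcal{S}_T^* = O(1) while \mathcal{P}_T^* = O(\sqrt{T}).
\mathcal{S}_T^* = \sum_{t=2}^{T} \left\| \mathbf{x}_t^* - \mathbf{x}_{t-1}^* \right\|^2
```

In this notation, the result stated above for strongly convex functions with multiple gradient queries is a dynamic-regret bound of $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$.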

Related articles:
arXiv:2006.05876 [cs.LG] (Published 2020-06-10)
Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions
arXiv:2201.06532 [cs.LG] (Published 2022-01-17, updated 2022-03-07)
A New Look at Dynamic Regret for Non-Stationary Stochastic Bandits
arXiv:1810.03024 [cs.LG] (Published 2018-10-06)
Learning to Optimize under Non-Stationarity