arXiv:2105.14648 [cs.LG]

Sharper bounds for online learning of smooth functions of a single variable

Jesse Geneson

Published 2021-05-30 (Version 1)

We investigate the generalization of the mistake-bound model to online learning of continuous real-valued functions of a single variable. Let $\mathcal{F}_q$ be the class of absolutely continuous functions $f: [0, 1] \rightarrow \mathbb{R}$ with $||f'||_q \le 1$, and define $opt_p(\mathcal{F}_q)$ as the best possible bound on the worst-case sum of the $p^{th}$ powers of the absolute prediction errors over any number of trials. Kimber and Long (Theoretical Computer Science, 1995) proved for $q \ge 2$ that $opt_p(\mathcal{F}_q) = 1$ when $p \ge 2$ and $opt_p(\mathcal{F}_q) = \infty$ when $p = 1$. For $1 < p < 2$, writing $p = 1+\epsilon$, the only known bound was $opt_p(\mathcal{F}_{q}) = O(\epsilon^{-1})$ from the same paper. We show for all $\epsilon \in (0, 1)$ and $q \ge 2$ that $opt_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$. We also show that $opt_{1+\epsilon}(\mathcal{F}_{\infty}) = \Theta(\epsilon^{-\frac{1}{2}})$.
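
To make the quantity being bounded concrete, the following is a minimal Python sketch of the trial-by-trial protocol: on each trial an adversary queries a point $x \in [0, 1]$, the learner predicts $f(x)$ before the true value is revealed, and the $p^{th}$ powers of the absolute errors accumulate. The nearest-neighbor learner and the specific 1-Lipschitz target below are illustrative assumptions, not the paper's algorithm or its lower-bound construction.

    def online_trials(f, xs, p, predict):
        # Run the online protocol: on each trial the learner predicts f(x)
        # before the true value is revealed, and the p-th powers of the
        # absolute errors accumulate.  Returns the cumulative loss.
        seen = []      # (x, f(x)) pairs revealed on earlier trials
        total = 0.0
        for x in xs:
            y_hat = predict(seen, x)
            y = f(x)                  # true value, revealed after predicting
            total += abs(y - y_hat) ** p
            seen.append((x, y))
        return total

    def nearest_neighbor_predict(seen, x):
        # Predict the revealed value at the closest earlier query point
        # (0 before anything has been revealed).
        if not seen:
            return 0.0
        _, y0 = min(seen, key=lambda pair: abs(pair[0] - x))
        return y0

    # Illustration: a 1-Lipschitz target (so f lies in every F_q), p = 1.5.
    f = lambda x: abs(x - 0.5)
    xs = [0.0, 1.0, 0.5, 0.25, 0.75]
    print(online_trials(f, xs, p=1.5, predict=nearest_neighbor_predict))

In these terms, $opt_p(\mathcal{F}_q)$ is the best guarantee on this cumulative loss that any learner can achieve simultaneously against every $f \in \mathcal{F}_q$ and every query sequence.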

Related articles:
arXiv:2301.01434 [cs.LG] (Published 2023-01-04)
Online Learning of Smooth Functions
arXiv:1508.00842 [cs.LG] (Published 2015-08-04)
Perceptron like Algorithms for Online Learning to Rank
arXiv:2009.11942 [cs.LG] (Published 2020-09-24)
Online Learning With Adaptive Rebalancing in Nonstationary Environments