arXiv:2105.14648 [cs.LG]
Sharper bounds for online learning of smooth functions of a single variable
Published 2021-05-30 (Version 1)
We investigate the generalization of the mistake-bound model to continuous real-valued functions of a single variable. Let $\mathcal{F}_q$ be the class of absolutely continuous functions $f: [0, 1] \rightarrow \mathbb{R}$ with $\|f'\|_q \le 1$, and define $\mathrm{opt}_p(\mathcal{F}_q)$ as the best possible bound on the worst-case sum of the $p^{\text{th}}$ powers of the absolute prediction errors over any number of trials. Kimber and Long (Theoretical Computer Science, 1995) proved for $q \ge 2$ that $\mathrm{opt}_p(\mathcal{F}_q) = 1$ when $p \ge 2$ and $\mathrm{opt}_p(\mathcal{F}_q) = \infty$ when $p = 1$. For $1 < p < 2$ with $p = 1+\epsilon$, the only known bound was $\mathrm{opt}_p(\mathcal{F}_{q}) = O(\epsilon^{-1})$, from the same paper. We show, for all $\epsilon \in (0, 1)$ and $q \ge 2$, that $\mathrm{opt}_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-1/2})$, where the constants in the bound do not depend on $q$. We also show that $\mathrm{opt}_{1+\epsilon}(\mathcal{F}_{\infty}) = \Theta(\epsilon^{-1/2})$.
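For concreteness, the quantity studied here can be written as a game between a learner and an adversary. The display below is a restatement of the definition in the abstract; the notation $A$ for the learner's strategy and $\hat{y}_t$ for its predictions is ours:

$$\mathrm{opt}_p(\mathcal{F}_q) \;=\; \inf_{A} \; \sup_{n,\; f \in \mathcal{F}_q,\; x_1, \dots, x_n \in [0,1]} \; \sum_{t=1}^{n} \left| \hat{y}_t - f(x_t) \right|^p,$$

where on trial $t$ the learner sees $x_t$, predicts $\hat{y}_t$ from $x_t$ and the previously revealed pairs $(x_1, f(x_1)), \dots, (x_{t-1}, f(x_{t-1}))$, and only then is $f(x_t)$ revealed.

The same protocol is easy to simulate. The sketch below is a minimal illustration assuming a deterministic learner that predicts the value at the nearest previously seen point; this simple strategy is only an example and is not claimed to achieve $\mathrm{opt}_p$ or to match the paper's algorithm.

# A minimal sketch of the online protocol behind opt_p(F_q). On each
# trial the adversary reveals x_t, the learner predicts y_hat_t, then
# f(x_t) is revealed and the learner pays |y_hat_t - f(x_t)|^p.
# The nearest-point predictor is illustrative, not the paper's method.

def run_trials(f, xs, p):
    """Total p-th-power prediction loss of a nearest-point learner
    on trial points xs against target function f."""
    seen = []            # (x, f(x)) pairs revealed so far
    total_loss = 0.0
    for x in xs:
        if seen:
            # Predict the value at the closest previously seen point.
            _, y_hat = min(seen, key=lambda pair: abs(pair[0] - x))
        else:
            y_hat = 0.0  # arbitrary first prediction
        y = f(x)         # true value, revealed only after predicting
        total_loss += abs(y_hat - y) ** p
        seen.append((x, y))
    return total_loss

# f(x) = x has f' = 1 a.e., so ||f'||_q = 1 for every q and f lies in F_q.
print(run_trials(lambda x: x, [0.0, 1.0, 0.5, 0.25, 0.75], p=2))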