arXiv Analytics


arXiv:1804.04529 [cs.LG]

Online convex optimization and no-regret learning: Algorithms, guarantees and applications

E. Veronica Belmega, Panayotis Mertikopoulos, Romain Negrel, Luca Sanguinetti

Published 2018-04-12, Version 1

Spurred by the enthusiasm surrounding the "Big Data" paradigm, the mathematical and algorithmic tools of online optimization have found widespread use in problems where the trade-off between data exploration and exploitation plays a predominant role. This trade-off is of particular importance to several branches and applications of signal processing, such as data mining, statistical inference, multimedia indexing and wireless communications (to name but a few). With this in mind, the aim of this tutorial paper is to provide a gentle introduction to online optimization and learning algorithms that are asymptotically optimal in hindsight - i.e., they approach the performance of a virtual algorithm with unlimited computational power and full knowledge of the future, a property known as no-regret. Particular attention is devoted to identifying the algorithms' theoretical performance guarantees and to establishing links with classic optimization paradigms (both static and stochastic). To allow a better understanding of this toolbox, we provide several examples throughout the tutorial, ranging from metric learning to wireless resource allocation problems.
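For readers unfamiliar with the no-regret property mentioned above, the sketch below illustrates it with online gradient descent (OGD), one of the classic no-regret algorithms covered in tutorials of this kind. The quadratic losses, step-size schedule, and feasible set are illustrative assumptions for this example, not details taken from the paper.

# Minimal sketch of online gradient descent and its regret against the
# best fixed action in hindsight. Losses and step sizes are assumptions
# chosen for illustration, not taken from the paper.
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0):
    """Play OGD against a sequence of convex losses given by their gradients."""
    x = np.zeros(dim)
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x.copy())
        step = radius / np.sqrt(t)  # vanishing step size ~ 1/sqrt(t)
        x = project_to_ball(x - step * grad(x), radius)
    return iterates

# Example: quadratic losses f_t(x) = ||x - a_t||^2 with random targets a_t.
rng = np.random.default_rng(0)
targets = [rng.uniform(-0.5, 0.5, size=2) for _ in range(1000)]
grads = [lambda x, a=a: 2.0 * (x - a) for a in targets]
iterates = online_gradient_descent(grads, dim=2)

# Cumulative regret against the best fixed point in hindsight
# (for these losses, that point is the mean of the targets).
best = np.mean(targets, axis=0)
regret = sum(np.sum((x - a) ** 2) - np.sum((best - a) ** 2)
             for x, a in zip(iterates, targets))
print(f"cumulative regret after {len(targets)} rounds: {regret:.3f}")

In this setting the cumulative regret grows sublinearly in the number of rounds, so the time-averaged regret vanishes, which is the "no-regret" guarantee the abstract refers to.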

Comments: 34 pages, 15 figures
Categories: cs.LG, math.OC, stat.ML
Subjects: 68Q32, 90C90, 68T05, 91A26, 94A12
Related articles:
arXiv:1201.5338 [cs.LG] (Published 2012-01-25, updated 2012-09-21)
On Constrained Spectral Clustering and Its Applications
arXiv:1812.08434 [cs.LG] (Published 2018-12-20)
Graph Neural Networks: A Review of Methods and Applications
arXiv:1905.00331 [cs.LG] (Published 2019-05-01)
High-Performance Support Vector Machines and Its Applications