arXiv:1209.3353 [cs.LG]

Further Optimal Regret Bounds for Thompson Sampling

Shipra Agrawal, Navin Goyal

Published 2012-09-15 (Version 1)

Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas and has recently generated significant interest after several studies demonstrated that it empirically outperforms state-of-the-art methods. In this paper, we provide a novel regret analysis for Thompson Sampling that simultaneously proves both the optimal problem-dependent bound of $(1+\epsilon)\sum_i \frac{\ln T}{d(\mu_i,\mu_1)}+O(\frac{N}{\epsilon^2})$, where $d(\mu_i,\mu_1)$ denotes the Kullback-Leibler divergence between the mean reward of arm $i$ and that of the optimal arm, and the first near-optimal problem-independent bound of $O(\sqrt{NT\ln T})$ on the expected regret of this algorithm. Our near-optimal problem-independent bound solves a COLT 2012 open problem of Chapelle and Li. The optimal problem-dependent regret bound for this problem was first proven recently by Kaufmann et al. [ALT 2012]. Our novel martingale-based analysis techniques are conceptually simple, extend easily to distributions other than the Beta distribution, and also extend to the more general contextual bandits setting [Manuscript, Agrawal and Goyal, 2012].
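For context, the algorithm analyzed here is standard Thompson Sampling for the N-armed Bernoulli bandit with Beta posteriors. Below is a minimal Python sketch of that textbook algorithm; the `pull` callable interface and the uniform Beta(1, 1) priors are illustrative assumptions, not specifics taken from this abstract.

import random

def thompson_sampling(pull, N, T):
    """Standard Thompson Sampling for an N-armed Bernoulli bandit.

    pull(i) plays arm i and returns a 0/1 reward. Each arm's unknown mean
    is modeled with a Beta posterior, starting from a Beta(1, 1) prior.
    """
    successes = [0] * N  # arm i's posterior is Beta(successes[i] + 1, failures[i] + 1)
    failures = [0] * N
    total_reward = 0
    for _ in range(T):
        # Draw one sample from each arm's posterior and play the argmax.
        theta = [random.betavariate(successes[i] + 1, failures[i] + 1)
                 for i in range(N)]
        i = max(range(N), key=theta.__getitem__)
        r = pull(i)
        total_reward += r
        if r:
            successes[i] += 1
        else:
            failures[i] += 1
    return total_reward

For example, pull = lambda i: int(random.random() < [0.3, 0.5, 0.7][i]) simulates three Bernoulli arms; the regret bounds above measure the expected shortfall of such a run against always playing the best arm (here, the one with mean 0.7).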

Comments: arXiv admin note: substantial text overlap with arXiv:1111.1797
Categories: cs.LG, cs.DS, stat.ML
Subjects: 68W40, 68Q25, F.2.0
Related articles:
arXiv:1209.3352 [cs.LG] (Published 2012-09-15, updated 2014-02-03)
Thompson Sampling for Contextual Bandits with Linear Payoffs
arXiv:1708.04781 [cs.LG] (Published 2017-08-16)
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors
arXiv:2006.06372 [cs.LG] (Published 2020-06-11)
TS-UCB: Improving on Thompson Sampling With Little to No Additional Computation