arXiv:2001.02323 [cs.LG]

On Thompson Sampling for Smoother-than-Lipschitz Bandits

James A. Grant, David S. Leslie

Published 2020-01-08 (Version 1)

Thompson Sampling is a well-established approach to bandit and reinforcement learning problems. However, its use in continuum-armed bandit problems has received relatively little attention. We provide the first bounds on the regret of Thompson Sampling for continuum-armed bandits under weak conditions on the function class containing the true function and under sub-exponential observation noise. Our bounds are realised by analysis of the eluder dimension, a recently proposed measure of the complexity of a function class, which has been demonstrated to be useful in bounding the Bayesian regret of Thompson Sampling for simpler bandit problems under sub-Gaussian observation noise. We derive a new bound on the eluder dimension for classes of functions with Lipschitz derivatives, and generalise previous analyses in several respects.
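To illustrate the setting, the following is a minimal Thompson Sampling sketch for a one-dimensional continuum-armed bandit. Everything here — the reward function, the polynomial feature map, the noise level, and the discretisation grid — is a hypothetical stand-in, not the paper's construction; the polynomial class is used only because its members are smooth, with Lipschitz derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth reward function on [0, 1] (illustrative, not from the paper).
def true_f(x):
    return np.sin(3 * x) * x

def features(x, d=6):
    # Polynomial feature map: a simple function class whose members
    # have Lipschitz derivatives on [0, 1].
    return np.vander(np.atleast_1d(x), d, increasing=True)

d, noise_sd = 6, 0.1
grid = np.linspace(0.0, 1.0, 201)   # discretised arm space for the argmax step
A = np.eye(d)                        # posterior precision (prior: w ~ N(0, I))
b = np.zeros(d)                      # precision-weighted mean accumulator

regret = 0.0
best = true_f(grid).max()
for t in range(200):
    # Thompson step: sample one function from the posterior, play its argmax.
    cov = np.linalg.inv(A)
    w = rng.multivariate_normal(cov @ b, cov)
    x = grid[np.argmax(features(grid, d) @ w)]
    y = true_f(x) + noise_sd * rng.normal()
    # Conjugate Bayesian linear-regression posterior update.
    phi = features(x, d)[0]
    A += np.outer(phi, phi) / noise_sd**2
    b += phi * y / noise_sd**2
    regret += best - true_f(x)

print(f"cumulative regret after 200 rounds: {regret:.2f}")
```

Because each round plays the argmax of a single posterior sample, exploration falls off as the posterior concentrates, which is the mechanism the paper's regret analysis controls via the eluder dimension of the function class.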

Comments: Accepted to AISTATS 2020. 26 pages, 2 figures
Categories: cs.LG, stat.ML
Related articles:
arXiv:2104.06970 [cs.LG] (Published 2021-04-14)
Eluder Dimension and Generalized Rank
arXiv:1708.04781 [cs.LG] (Published 2017-08-16)
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors
arXiv:2006.06372 [cs.LG] (Published 2020-06-11)
TS-UCB: Improving on Thompson Sampling With Little to No Additional Computation