arXiv Analytics

arXiv:1209.3352 [cs.LG]

Thompson Sampling for Contextual Bandits with Linear Payoffs

Shipra Agrawal, Navin Goyal

Published 2012-09-15, updated 2014-02-03 (Version 4)

Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated that it has better empirical performance than state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of the Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, where the contexts are provided by an adaptive adversary. This is among the most important and widely studied versions of the contextual bandit problem. We provide the first theoretical guarantees for the contextual version of Thompson Sampling. We prove a high-probability regret bound of $\tilde{O}(d^{3/2}\sqrt{T})$ (or $\tilde{O}(d\sqrt{T \log(N)})$), which is the best regret bound achieved by any computationally efficient algorithm available for this problem in the current literature, and is within a factor of $\sqrt{d}$ (or $\sqrt{\log(N)}$) of the information-theoretic lower bound for this problem.
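To make the setting concrete, the following is a minimal simulation sketch of Thompson Sampling with linear payoffs: maintain a Gaussian posterior over the unknown parameter, sample a parameter from the posterior each round, and play the arm whose context vector maximizes the sampled payoff. All dimensions, the scaling parameter `v`, and the simulated reward model are illustrative assumptions for this sketch, not the paper's exact construction (the paper's contexts come from an adaptive adversary; here they are drawn at random).

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 10, 2000                      # feature dim, arms, rounds (illustrative)
theta_star = rng.normal(size=d) / np.sqrt(d)  # hidden parameter, only used to simulate rewards
v = 0.5                                    # posterior-scaling parameter (tunable assumption)

B = np.eye(d)                              # design matrix: I + sum of x x^T over played arms
f = np.zeros(d)                            # accumulated reward-weighted contexts
regret = 0.0

for t in range(T):
    X = rng.normal(size=(K, d))            # one context vector per arm this round
    mu = np.linalg.solve(B, f)             # ridge-regression posterior mean
    # Sample a parameter from the Gaussian posterior N(mu, v^2 B^{-1})
    theta_t = rng.multivariate_normal(mu, v**2 * np.linalg.inv(B))
    a = int(np.argmax(X @ theta_t))        # play the arm maximizing the sampled payoff
    r = X[a] @ theta_star + 0.1 * rng.normal()  # observe a noisy linear reward
    B += np.outer(X[a], X[a])              # rank-one posterior update
    f += X[a] * r
    regret += np.max(X @ theta_star) - X[a] @ theta_star
```

After enough rounds the posterior concentrates and the per-round regret shrinks, which is the qualitative behavior the $\tilde{O}(d^{3/2}\sqrt{T})$ bound quantifies.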

Comments: Improvements from previous version: (1) dependence on d improved from d^2 to d^{3/2}; (2) simpler and more modular proof techniques; (3) bounds in terms of log(N) added
Categories: cs.LG, cs.DS, stat.ML
Subjects: 68W40, 68Q25, F.2.0