arXiv:1905.12624 [cs.LG]

Combinatorial Bandits with Full-Bandit Feedback: Sample Complexity and Regret Minimization

Idan Rejwan, Yishay Mansour

Published 2019-05-28 (Version 1)

Combinatorial Bandits generalize multi-armed bandits: at each round, k out of n arms are chosen and the sum of their rewards is gained. We address the full-bandit feedback setting, in which the agent observes only the sum of rewards, in contrast to semi-bandit feedback, in which the agent also observes each individual arm's reward. We present the Combinatorial Successive Accepts and Rejects (CSAR) algorithm, a generalization of the SAR algorithm (Bubeck et al. 2013) to the combinatorial setting. Our main contribution is an efficient sampling scheme that uses Hadamard matrices to accurately estimate the individual arms' expected rewards. We discuss two variants of the algorithm, one minimizing the sample complexity and the other minimizing the regret. For the sample complexity we also prove a matching lower bound, showing the algorithm is optimal. For regret minimization, we prove a lower bound that is tight up to a factor of k. Finally, we run experiments showing that our algorithm outperforms other methods.
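To illustrate the idea behind the Hadamard-based sampling scheme, here is a minimal sketch (not the paper's exact CSAR procedure): each row of a Sylvester Hadamard matrix H defines a subset of arms to pull, the agent observes only the sum of that subset's rewards, and the individual means are recovered by inverting the design using H·Hᵀ = n·I. The function names and the noiseless `observe` oracle are assumptions for illustration; the real algorithm averages noisy pulls.

```python
# Sketch: recovering individual arm means from subset-sum (full-bandit)
# observations via a Hadamard design. Noiseless setting, for illustration.

def sylvester_hadamard(n):
    """Build the n x n Sylvester Hadamard matrix (n must be a power of 2)."""
    H = [[1]]
    while len(H) < n:
        # Sylvester doubling: H_{2m} = [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def estimate_means(observe, n):
    """Query the subset sums prescribed by the Hadamard rows, then invert.

    `observe(subset)` returns the (here: exact) sum of rewards of the arms
    in `subset`. Row i of the 0/1 design M = (H + 1)/2 defines subset i.
    """
    H = sylvester_hadamard(n)
    # y[i] = sum of means over the arms j where H[i][j] == +1
    y = [observe([j for j in range(n) if H[i][j] == 1]) for i in range(n)]
    total = y[0]                       # row 0 queries all n arms at once
    v = [2 * yi - total for yi in y]   # v = H @ mu, since H = 2*M - ones
    # Invert using H @ H^T = n * I (Sylvester H is symmetric)
    return [sum(H[i][j] * v[j] for j in range(n)) / n for i in range(n)]

# Toy example: 4 arms with hidden means, observed only through subset sums
mu = [0.1, 0.5, 0.3, 0.7]
est = estimate_means(lambda S: sum(mu[j] for j in S), 4)
```

With noise, each subset sum would be averaged over repeated pulls; the Hadamard structure keeps the number of distinct subsets at n while the orthogonality of H prevents the estimation error from blowing up under inversion.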

Related articles:
arXiv:1206.6461 [cs.LG] (Published 2012-06-27)
On the Sample Complexity of Reinforcement Learning with a Generative Model
arXiv:2106.07046 [cs.LG] (Published 2021-06-13)
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
arXiv:1806.02970 [cs.LG] (Published 2018-06-08)
PAC Ranking from Pairwise and Listwise Queries: Lower Bounds and Upper Bounds