arXiv:2005.09841 [cs.LG]

Best Arm Identification in Spectral Bandits

Tomáš Kocák, Aurélien Garivier

Published 2020-05-20 | Version 1

We study best-arm identification with fixed confidence in bandit models with a graph smoothness constraint. We provide and analyze an efficient gradient ascent algorithm to compute the sample complexity of this problem as the solution of a non-smooth max-min problem (providing in passing a simplified analysis for the unconstrained case). Building on this algorithm, we propose an asymptotically optimal strategy. We furthermore illustrate by numerical experiments both the strategy's efficiency and the impact of the smoothness constraint on the sample complexity.

Best Arm Identification (BAI) is an important challenge in many applications ranging from parameter tuning to clinical trials. It is now very well understood in vanilla bandit models, but real-world problems typically involve some dependency between arms that requires more involved models. Assuming a graph structure on the arms is an elegant practical way to encompass this phenomenon, but this had so far been done only for regret minimization. Addressing BAI with graph constraints involves delicate optimization problems for which the present paper offers a solution.
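To make the max-min characterization concrete, the following is a minimal numerical sketch, in Python with NumPy, of projected (super)gradient ascent for the unconstrained (vanilla) case with unit-variance Gaussian arms, where the inner minimization over alternative models has a closed form. Everything below (the names project_simplex, objective, optimal_weights, the step-size schedule, and the toy means) is an illustrative assumption and not the authors' implementation; the paper's graph-smoothness constraint changes the inner minimization, which then no longer reduces to this simple closed form.

import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sorting-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def objective(w, mu, best):
    # f(w) = min over suboptimal arms a of w_b*w_a/(w_b+w_a) * (mu_b-mu_a)^2 / 2:
    # the inner minimum of the max-min problem for unit-variance Gaussian arms
    # WITHOUT the graph-smoothness constraint (illustrative simplification).
    arms = [a for a in range(len(mu)) if a != best]
    vals = np.array([w[best] * w[a] / max(w[best] + w[a], 1e-12)
                     * (mu[best] - mu[a]) ** 2 / 2.0 for a in arms])
    i = int(np.argmin(vals))
    return vals[i], arms[i]

def optimal_weights(mu, n_iters=5000, lr=1.0):
    # Projected supergradient ascent on the simplex: ascend along the gradient of
    # the active piece of the (concave, non-smooth) objective, then project back.
    K, best = len(mu), int(np.argmax(mu))
    w = np.full(K, 1.0 / K)
    for t in range(1, n_iters + 1):
        _, a = objective(w, mu, best)
        s = max(w[best] + w[a], 1e-12)
        grad = np.zeros(K)
        grad[best] = (w[a] / s) ** 2 * (mu[best] - mu[a]) ** 2 / 2.0
        grad[a] = (w[best] / s) ** 2 * (mu[best] - mu[a]) ** 2 / 2.0
        w = project_simplex(w + lr / np.sqrt(t) * grad)
    return w

mu = np.array([1.0, 0.8, 0.5, 0.3])            # toy means; best arm is index 0
w_star = optimal_weights(mu)
f_star, _ = objective(w_star, mu, int(np.argmax(mu)))
print(w_star, 1.0 / f_star)                    # oracle proportions and T*(mu)
# Expected sample complexity at confidence delta scales like T*(mu) * log(1/delta).

An asymptotically optimal strategy of the kind the abstract mentions would track such oracle proportions while sampling and stop via a generalized likelihood-ratio test; that part is only hinted at here.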

Comments: To be published in the International Joint Conference on Artificial Intelligence (IJCAI)
Categories: cs.LG, stat.ML
Related articles:
arXiv:1811.07476 [cs.LG] (Published 2018-11-19, updated 2019-01-28)
Best Arm Identification in Linked Bandits
arXiv:1206.6461 [cs.LG] (Published 2012-06-27)
On the Sample Complexity of Reinforcement Learning with a Generative Model
arXiv:1905.12624 [cs.LG] (Published 2019-05-28)
Combinatorial Bandits with Full-Bandit Feedback: Sample Complexity and Regret Minimization