arXiv:1811.07476 [cs.LG]

Best Arm Identification in Linked Bandits

Anant Gupta

Published 2018-11-19, updated 2019-01-28 (v2)

We consider the problem of best arm identification in a variant of multi-armed bandits called linked bandits. In a single interaction with linked bandits, multiple arms are played sequentially until one of them receives a positive reward. Since each interaction provides feedback about more than one arm, the sample complexity can be much lower than in the regular bandit setting. We propose an algorithm for linked bandits that combines a novel subroutine for uniform sampling with a known optimal algorithm for regular bandits. We prove almost matching upper and lower bounds on the sample complexity of best arm identification in linked bandits. These bounds have an interesting structure, with an explicit dependence on the mean rewards of the arms, not just the gaps. We also corroborate our theoretical results with experiments.
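To make the interaction model concrete, here is a minimal simulation sketch in Python. It illustrates only the feedback structure described in the abstract (arms played in sequence until the first positive reward), not the authors' algorithm; the class name, Bernoulli reward assumption, and arm means are hypothetical.

```python
import numpy as np

class LinkedBanditEnv:
    """Toy linked-bandit environment with Bernoulli arms (illustrative assumption).

    In one interaction, a chosen sequence of arms is played in order until
    some arm yields a positive reward (or the sequence is exhausted), so a
    single interaction provides feedback about every arm played up to and
    including the first success.
    """

    def __init__(self, means, rng=None):
        self.means = np.asarray(means, dtype=float)  # true mean reward of each arm
        self.rng = rng if rng is not None else np.random.default_rng()

    def interact(self, arm_sequence):
        """Play arms in `arm_sequence` until a positive reward is observed.

        Returns a list of (arm, reward) observations for the arms actually played.
        """
        observations = []
        for arm in arm_sequence:
            reward = int(self.rng.random() < self.means[arm])
            observations.append((arm, reward))
            if reward > 0:  # stop at the first positive reward
                break
        return observations


# Example: one interaction can yield observations for several arms at once,
# which is why the sample complexity can be lower than in regular bandits.
env = LinkedBanditEnv(means=[0.1, 0.3, 0.8])
print(env.interact(arm_sequence=[0, 1, 2]))
```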

Related articles:
arXiv:2005.09841 [cs.LG] (Published 2020-05-20)
Best Arm Identification in Spectral Bandits
arXiv:1207.1366 [cs.LG] (Published 2012-07-04)
Learning Factor Graphs in Polynomial Time & Sample Complexity
arXiv:1402.4844 [cs.LG] (Published 2014-02-19, updated 2016-05-26)
Subspace Learning with Partial Information