arXiv Analytics

arXiv:2312.12137 [cs.LG]

Best Arm Identification with Fixed Budget: A Large Deviation Perspective

Po-An Wang, Ruo-Chun Tzeng, Alexandre Proutiere

Published 2023-12-19 (Version 1)

We consider the problem of identifying the best arm in stochastic Multi-Armed Bandits (MABs) using a fixed sampling budget. Characterizing the minimal instance-specific error probability for this problem constitutes one of the important remaining open problems in MABs. When arms are selected using a static sampling strategy, the error probability decays exponentially with the number of samples at a rate that can be explicitly derived via Large Deviation techniques. Analyzing the performance of algorithms with adaptive sampling strategies is, however, much more challenging. In this paper, we establish a connection between the Large Deviation Principle (LDP) satisfied by the empirical proportions of arm draws and that satisfied by the empirical arm rewards. This connection holds for any adaptive algorithm, and is leveraged (i) to improve error probability upper bounds of some existing algorithms, such as the celebrated Successive Rejects (SR) algorithm (Audibert et al., 2010), and (ii) to devise and analyze new algorithms. In particular, we present Continuous Rejects (CR), a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms. Applying our Large Deviation results, we prove that CR enjoys better performance guarantees than existing algorithms, including SR. Extensive numerical experiments confirm this observation.
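To make the baseline concrete, here is a minimal sketch of the Successive Rejects algorithm that the abstract refers to, following the phase-length schedule of Audibert et al. (2010). The `arms` interface (a list of zero-argument reward samplers) is a hypothetical choice for this sketch, not the paper's code:

```python
import math

def successive_rejects(arms, budget):
    """Successive Rejects (Audibert et al., 2010): split the budget into
    K-1 phases; at the end of each phase, permanently discard the arm
    with the lowest empirical mean reward. Returns the surviving arm index."""
    K = len(arms)
    # \bar{log}(K) = 1/2 + sum_{i=2}^{K} 1/i, the normalizer of the schedule
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    active = list(range(K))
    counts = [0] * K
    sums = [0.0] * K
    n_prev = 0
    for k in range(1, K):
        # cumulative per-arm sample count prescribed for phase k
        n_k = math.ceil((budget - K) / (log_bar * (K + 1 - k)))
        for a in active:
            for _ in range(n_k - n_prev):
                sums[a] += arms[a]()
                counts[a] += 1
        n_prev = n_k
        # reject the empirically worst surviving arm
        worst = min(active, key=lambda a: sums[a] / counts[a])
        active.remove(worst)
    return active[0]
```

Unlike CR, which may reject an arm in any round once the empirical gaps are large enough, SR rejects exactly one arm per phase at predetermined times; this fixed schedule is what the paper's improved analysis and the CR algorithm relax.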

Related articles:
arXiv:1811.07476 [cs.LG] (Published 2018-11-19, updated 2019-01-28)
Best Arm Identification in Linked Bandits
arXiv:1608.06031 [cs.LG] (Published 2016-08-22)
Towards Instance Optimal Bounds for Best Arm Identification
arXiv:2005.09841 [cs.LG] (Published 2020-05-20)
Best Arm Identification in Spectral Bandits