arXiv:1808.04008 [cs.LG]

PAC-Battling Bandits with Plackett-Luce: Tradeoff between Sample Complexity and Subset Size

Aditya Gopalan, Aadirupa Saha

Published 2018-08-12 (Version 1)

We introduce the probably approximately correct (PAC) version of the problem of Battling Bandits with the Plackett-Luce (PL) model -- an online learning framework where, in each trial, the learner chooses a subset of $k \le n$ arms from a fixed pool of $n$ arms and subsequently observes stochastic feedback indicating preference information over the items in the chosen subset, e.g., the most preferred item or a ranking of the top $m$ most preferred items. The objective is to recover an `approximate-best' item of the underlying PL model with high probability. This framework is motivated by practical settings such as recommendation systems and information retrieval, where it is easier and more efficient to collect relative feedback for multiple arms at once. Our framework can be seen as a generalization of the well-studied PAC Dueling-Bandit problem over a set of $n$ arms. We propose two different feedback models: winner information only (WI), and ranking of the top $m$ items (TR), for any $2\le m \le k$. We show that with winner information (WI) alone, one cannot recover the `approximate-best' item with sample complexity less than $\Omega\bigg( \frac{n}{\epsilon^2} \ln \frac{1}{\delta}\bigg)$, which is independent of $k$ and the same as that required in the standard dueling bandit setting ($k=2$). With top-$m$ ranking (TR) feedback, however, our lower bound analysis proves an improved sample complexity guarantee of $\Omega\bigg( \frac{n}{m\epsilon^2} \ln \frac{1}{\delta}\bigg)$, a relative improvement by a factor of $\frac{1}{m}$ compared to WI feedback, justifying the additional information gained from knowing the ranking of the top $m$ items. We also provide algorithms for each of the above feedback models; our theoretical analyses prove the optimality of their sample complexities, which match the derived lower bounds (up to logarithmic factors).
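
To make the two feedback models concrete, here is a minimal sketch (not from the paper) of how top-$m$ ranking (TR) feedback could be simulated from a Plackett-Luce model; the score vector `theta`, the helper name `pl_top_m_feedback`, and the use of Python/NumPy are illustrative assumptions.

```python
import numpy as np

def pl_top_m_feedback(theta, subset, m, rng=None):
    """Simulate top-m ranking (TR) feedback from a Plackett-Luce model.

    theta  : positive PL score for every arm (hypothetical values).
    subset : indices of the k <= n arms offered in this trial.
    m      : number of top items revealed; m = 1 gives winner-only (WI) feedback.
    """
    rng = np.random.default_rng() if rng is None else rng
    remaining = list(subset)
    ranking = []
    # Sequentially draw the next-most-preferred item with probability
    # proportional to its PL score among the items not yet ranked.
    for _ in range(min(m, len(remaining))):
        scores = np.array([theta[i] for i in remaining], dtype=float)
        winner = int(rng.choice(remaining, p=scores / scores.sum()))
        ranking.append(winner)
        remaining.remove(winner)
    return ranking

# Example: offer the subset {0, 2, 5} and observe the top-2 ranking.
theta = np.array([0.9, 0.4, 1.5, 0.2, 0.7, 1.1])
print(pl_top_m_feedback(theta, [0, 2, 5], m=2))
```

Setting `m=1` reduces TR feedback to the WI (winner-only) case, mirroring the relationship between the two feedback models described in the abstract.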

Related articles:
arXiv:1802.04350 [cs.LG] (Published 2018-02-12)
On the Sample Complexity of Learning from a Sequence of Experiments
arXiv:1905.12624 [cs.LG] (Published 2019-05-28)
Combinatorial Bandits with Full-Bandit Feedback: Sample Complexity and Regret Minimization
arXiv:2002.10021 [cs.LG] (Published 2020-02-24)
How Transferable are the Representations Learned by Deep Q Agents?