arXiv Analytics

arXiv:2409.05072 [cs.LG]

A General Framework for Clustering and Distribution Matching with Bandit Feedback

Recep Can Yavas, Yuqi Huang, Vincent Y. F. Tan, Jonathan Scarlett

Published 2024-09-08 (Version 1)

We develop a general framework for clustering and distribution matching problems with bandit feedback. We consider a $K$-armed bandit model in which some subset of the $K$ arms is partitioned into $M$ groups. Within each group, the random variable associated with each arm follows the same distribution on a finite alphabet. At each time step, the decision maker pulls an arm and observes an outcome from the random variable associated with that arm. Subsequent arm pulls depend on the history of arm pulls and their outcomes. The decision maker has no knowledge of the distributions of the arms or of the underlying partition. The task is to devise an online algorithm that learns the underlying partition of the arms with the fewest arm pulls on average and with an error probability not exceeding a pre-determined value $\delta$. Several existing problems fall under our general framework, including finding $M$ pairs of arms, odd arm identification, and $M$-ary clustering of $K$ arms. We derive a non-asymptotic lower bound on the average number of arm pulls for any online algorithm with an error probability not exceeding $\delta$. Furthermore, we develop a computationally efficient online algorithm based on the Track-and-Stop method and the Frank-Wolfe algorithm, and show that the average number of arm pulls of our algorithm asymptotically matches the lower bound. Our refined analysis also uncovers a novel bound on the speed at which the average number of arm pulls of our algorithm converges to the fundamental limit as $\delta$ vanishes.
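To make the problem setup concrete, the following toy sketch simulates the bandit model described above: each arm emits symbols from a finite alphabet, the learner pulls arms and groups those whose empirical distributions look alike. This is *not* the paper's Track-and-Stop/Frank-Wolfe algorithm — it uses naive uniform sampling with a fixed budget and a total-variation threshold (`pulls_per_arm` and `tol` are hypothetical illustrative choices), with no optimality or $\delta$-correctness guarantee:

```python
import random
from collections import Counter

def empirical(dist, n, rng):
    """Pull an arm n times (i.i.d. draws from its categorical
    distribution) and return the empirical frequencies."""
    symbols = rng.choices(range(len(dist)), weights=dist, k=n)
    counts = Counter(symbols)
    return [counts[a] / n for a in range(len(dist))]

def tv(p, q):
    """Total-variation distance between two distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def cluster_arms(arm_dists, pulls_per_arm=2000, tol=0.1, seed=0):
    """Naive baseline: sample every arm equally often, then greedily
    group arms whose empirical distributions are within TV distance
    `tol`. Returns one cluster label per arm."""
    rng = random.Random(seed)
    emp = [empirical(d, pulls_per_arm, rng) for d in arm_dists]
    labels = [-1] * len(arm_dists)
    next_label = 0
    for i in range(len(arm_dists)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, len(arm_dists)):
            if labels[j] == -1 and tv(emp[i], emp[j]) < tol:
                labels[j] = next_label
        next_label += 1
    return labels

# Three arms on a binary alphabet; the first two share a distribution.
print(cluster_arms([[0.8, 0.2], [0.8, 0.2], [0.2, 0.8]]))
```

The paper's contribution is precisely to replace this uniform, fixed-budget sampling with an adaptive arm-pull allocation and a stopping rule whose average sample complexity matches the lower bound as $\delta \to 0$.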

Comments: 22 pages, submitted to the IEEE Transactions on Information Theory in September 2024
Categories: cs.LG, stat.ML
Subjects: 68T05, I.2.6
Related articles:
arXiv:1912.01192 [cs.LG] (Published 2019-12-03)
Learning Adversarial MDPs with Bandit Feedback and Unknown Transition
arXiv:2004.13106 [cs.LG] (Published 2020-04-27)
Learning to Rank in the Position Based Model with Bandit Feedback
arXiv:2003.11940 [cs.LG] (Published 2020-03-25)
A general framework for causal classification