arXiv:1802.05693 [cs.LG]

Bandit Learning with Positive Externalities

Virag Shah, Jose Blanchet, Ramesh Johari

Published 2018-02-15 (Version 1)

Many platforms are characterized by the fact that future user arrivals are likely to have preferences similar to those of users who were satisfied in the past. In other words, arrivals exhibit positive externalities. We study multi-armed bandit (MAB) problems with positive externalities. Our model has a finite number of arms, and users are distinguished by the arm(s) they prefer. We model positive externalities by assuming that the preferred arms of future arrivals are self-reinforcing based on the experiences of past users. We show that classical algorithms such as UCB, which are optimal in the standard MAB setting, may exhibit linear regret in the presence of positive externalities. We provide an algorithm that achieves optimal regret, and show that this optimal regret has a substantially different structure from that observed in the standard MAB setting.
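
To make the self-reinforcing arrival dynamic concrete, below is a minimal simulation sketch pairing it with a standard UCB1 learner. The specific preference rule (a new user prefers arm a with probability proportional to (1 + past successes on a)^alpha), the two-arm Bernoulli setup, and all function and parameter names are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def simulate(T=5000, mu=(0.9, 0.8), alpha=1.0, seed=0):
    """Toy simulation of a bandit with self-reinforcing (positive-externality)
    arrivals, served by a UCB1 learner.

    Illustrative assumptions (not necessarily the paper's exact model):
      - Each arriving user prefers exactly one arm; arm a is preferred with
        probability proportional to (1 + cumulative rewards on a) ** alpha.
      - The user yields a Bernoulli(mu[a]) reward only if the chosen arm a
        is the one they prefer; otherwise the reward is 0.
    """
    rng = np.random.default_rng(seed)
    K = len(mu)
    pulls = np.zeros(K)        # times each arm was chosen
    means = np.zeros(K)        # empirical mean reward per arm
    successes = np.zeros(K)    # cumulative rewards; drives future preferences
    total_reward = 0.0

    for t in range(1, T + 1):
        # Self-reinforcing arrival: preference weights grow with past successes.
        weights = (1.0 + successes) ** alpha
        preferred = rng.choice(K, p=weights / weights.sum())

        # UCB1 choice (play each arm once first).
        if t <= K:
            arm = t - 1
        else:
            ucb = means + np.sqrt(2.0 * np.log(t) / pulls)
            arm = int(np.argmax(ucb))

        # Reward is earned only if the platform serves the user's preferred arm.
        reward = float(rng.random() < mu[arm]) if arm == preferred else 0.0

        pulls[arm] += 1
        means[arm] += (reward - means[arm]) / pulls[arm]
        successes[arm] += reward
        total_reward += reward

    return total_reward, successes

if __name__ == "__main__":
    reward, successes = simulate()
    print(f"total reward: {reward:.0f}, per-arm successes: {successes}")
```

In this toy model, an early run of zero rewards on the better arm can tilt future arrivals' preferences toward the other arm; because UCB1 treats reward means as stationary, it can then keep serving the reinforced arm, illustrating the kind of lock-in behind the linear-regret phenomenon the abstract describes.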

Related articles:
arXiv:1907.01287 [cs.LG] (Published 2019-07-02)
Bandit Learning Through Biased Maximum Likelihood Estimation
arXiv:2201.01902 [cs.LG] (Published 2022-01-06, updated 2022-01-30)
Gaussian Imagination in Bandit Learning
arXiv:2012.07348 [cs.LG] (Published 2020-12-14, updated 2020-12-31)
Bandit Learning in Decentralized Matching Markets