arXiv:2201.01902 [cs.LG]

Gaussian Imagination in Bandit Learning

Yueyang Liu, Adithya M. Devraj, Benjamin Van Roy, Kuang Xu

Published 2022-01-06, updated 2022-01-30 (version 2)

Assuming distributions are Gaussian often facilitates computations that are otherwise intractable. We study the performance of an agent that attains a bounded information ratio with respect to a bandit environment with a Gaussian prior distribution and a Gaussian likelihood function when applied instead to a Bernoulli bandit. Relative to an information-theoretic bound on the Bayesian regret the agent would incur when interacting with the Gaussian bandit, we bound the increase in regret when the agent instead interacts with the Bernoulli bandit. If the Gaussian prior distribution and likelihood function are sufficiently diffuse, this increase grows at a rate that is at most linear in the square root of the time horizon, so the per-timestep increase vanishes. Our results formalize the folklore that so-called Bayesian agents remain effective when instantiated with diffuse misspecified distributions.
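
To make the setup concrete, the sketch below shows the kind of mismatch the abstract describes: a Thompson sampling agent that maintains a Gaussian posterior under an assumed Gaussian prior and likelihood, while the environment actually returns Bernoulli rewards. This is a minimal illustration, not the paper's exact information-ratio agent; the function name, hyperparameters, and regret bookkeeping are illustrative assumptions.

    import numpy as np

    def gaussian_ts_on_bernoulli(true_means, horizon, prior_var=100.0,
                                 noise_var=1.0, seed=0):
        """Gaussian-imagination Thompson sampling on a Bernoulli bandit.

        The agent keeps a Gaussian posterior N(mu[a], var[a]) per arm,
        assuming a N(0, prior_var) prior and Gaussian reward noise with
        variance noise_var, even though rewards are Bernoulli(true_means[a]).
        A large prior_var corresponds to the diffuse-prior regime in the paper.
        """
        rng = np.random.default_rng(seed)
        k = len(true_means)
        mu = np.zeros(k)               # posterior means
        var = np.full(k, prior_var)    # posterior variances
        best = max(true_means)
        regret = 0.0
        for _ in range(horizon):
            # Sample an imagined mean for each arm; act greedily on the samples.
            samples = rng.normal(mu, np.sqrt(var))
            a = int(np.argmax(samples))
            r = float(rng.random() < true_means[a])  # actual Bernoulli reward
            # Conjugate Gaussian update, treating the 0/1 reward as Gaussian.
            precision = 1.0 / var[a] + 1.0 / noise_var
            mu[a] = (mu[a] / var[a] + r / noise_var) / precision
            var[a] = 1.0 / precision
            regret += best - true_means[a]
        return regret

    if __name__ == "__main__":
        # Cumulative regret on a 3-armed Bernoulli bandit over 10,000 steps.
        print(gaussian_ts_on_bernoulli([0.3, 0.5, 0.7], horizon=10_000))

Under the paper's result, one would expect the per-timestep regret of such a misspecified agent to vanish as the horizon grows, provided the imagined prior and likelihood are sufficiently diffuse.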

Related articles:
arXiv:1802.05693 [cs.LG] (Published 2018-02-15)
Bandit Learning with Positive Externalities
arXiv:1907.01287 [cs.LG] (Published 2019-07-02)
Bandit Learning Through Biased Maximum Likelihood Estimation
arXiv:2012.07348 [cs.LG] (Published 2020-12-14, updated 2020-12-31)
Bandit Learning in Decentralized Matching Markets