arXiv:2302.04452 [cs.LG]

An Information-Theoretic Analysis of Nonstationary Bandit Learning

Seungki Min, Daniel Russo

Published 2023-02-09 (Version 1)

In nonstationary bandit learning problems, the decision-maker must continually gather information and adapt their action selection as the latent state of the environment evolves. In each time period, a latent optimal action maximizes expected reward under the current environment state. We view the optimal action sequence as a stochastic process and take an information-theoretic approach to analyze attainable performance. We bound limiting per-period regret in terms of the entropy rate of the optimal action process. The bound applies to a wide array of problems studied in the literature and reflects the problem's information structure through its information ratio.
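As a rough illustration of the kind of bound announced in the abstract (a sketch of its general shape in the information-ratio framework of Russo and Van Roy, not the paper's exact statement; the symbols $A_t^*$, $R_{t,a}$, $\bar{\Gamma}$, and $\bar{H}$ are introduced here only for this illustration): if the optimal action process $\{A_t^*\}$ has entropy rate

\[
  \bar{H} \;=\; \lim_{T \to \infty} \frac{1}{T}\, H\bigl(A_1^*, A_2^*, \ldots, A_T^*\bigr),
\]

and the algorithm's per-period information ratio is bounded by some constant $\bar{\Gamma}$ reflecting the problem's information structure, then a limiting per-period regret bound of the following shape is what one would expect (up to constants and problem-specific refinements):

\[
  \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[\sum_{t=1}^{T} \bigl(R_{t,A_t^*} - R_{t,A_t}\bigr)\right]
  \;\lesssim\; \sqrt{\bar{\Gamma}\, \bar{H}}.
\]

Intuitively, a slowly varying environment has a nearly constant optimal action sequence, so $\bar{H}$ is small and low per-period regret is attainable; a rapidly changing environment drives $\bar{H}$ up and forces persistent regret.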

Related articles:
arXiv:1403.5341 [cs.LG] (Published 2014-03-21, updated 2015-06-08)
An Information-Theoretic Analysis of Thompson Sampling
arXiv:2207.08735 [cs.LG] (Published 2022-07-18)
An Information-Theoretic Analysis of Bayesian Reinforcement Learning
arXiv:2005.08697 [cs.LG] (Published 2020-05-18)
Information-theoretic analysis for transfer learning