arXiv:2106.07046 [cs.LG]

Towards Tight Bounds on the Sample Complexity of Average-reward MDPs

Yujia Jin, Aaron Sidford

Published 2021-06-13 (Version 1)

We prove new upper and lower bounds for the sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix} \epsilon^{-3})$ (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on $t_\mathrm{mix}$ is necessary in the worst case for any algorithm which computes oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs that may be of further utility.
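For intuition on the reduction the abstract alludes to, here is a back-of-the-envelope sketch (a heuristic calculation under standard assumptions, ignoring the validity ranges of the discounted-MDP bounds it invokes; it is not the paper's actual argument). For a policy $\pi$ with average reward $\rho^\pi$ and $\gamma$-discounted value $V_\gamma^\pi$, uniform mixing yields a bias bound of the form $|\rho^\pi - (1-\gamma) V_\gamma^\pi| \le O\big((1-\gamma)\, t_\mathrm{mix}\big)$, so choosing

$$1 - \gamma = \Theta\!\left(\frac{\epsilon}{t_\mathrm{mix}}\right)$$

makes this bias $O(\epsilon)$. To make $(1-\gamma)V_\gamma^\pi$ accurate to $\epsilon$, the discounted MDP must then be solved to value accuracy $\epsilon_V = \epsilon/(1-\gamma)$; plugging this into a generative-model discounted solver that uses $\widetilde{O}\big((1-\gamma)^{-3}\epsilon_V^{-2}\big)$ samples per state-action pair gives

$$\widetilde{O}\!\left(\frac{1}{(1-\gamma)^{3}} \cdot \frac{(1-\gamma)^{2}}{\epsilon^{2}}\right) = \widetilde{O}\!\left(\frac{1}{(1-\gamma)\,\epsilon^{2}}\right) = \widetilde{O}\!\left(\frac{t_\mathrm{mix}}{\epsilon^{3}}\right),$$

matching the stated upper bound. Making a connection of this flavor rigorous (in particular, handling the large accuracy parameter $\epsilon_V$ that the naive substitution requires) is where the technical work lies.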

Related articles:
arXiv:1905.12624 [cs.LG] (Published 2019-05-28)
Combinatorial Bandits with Full-Bandit Feedback: Sample Complexity and Regret Minimization
arXiv:1206.6461 [cs.LG] (Published 2012-06-27)
On the Sample Complexity of Reinforcement Learning with a Generative Model
arXiv:1806.02970 [cs.LG] (Published 2018-06-08)
PAC Ranking from Pairwise and Listwise Queries: Lower Bounds and Upper Bounds