arXiv Analytics

arXiv:2106.11612 [cs.LG]

Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation

Jiafan He, Dongruo Zhou, Quanquan Gu

Published 2021-06-22 (Version 1)

We study reinforcement learning (RL) with linear function approximation. Existing algorithms for this problem have only high-probability regret and/or Probably Approximately Correct (PAC) sample complexity guarantees, which cannot guarantee convergence to the optimal policy. In this paper, to overcome this limitation of existing algorithms, we propose a new algorithm called FLUTE, which enjoys uniform-PAC convergence to the optimal policy with high probability. The uniform-PAC guarantee is the strongest possible guarantee for reinforcement learning in the literature: it directly implies both PAC and high-probability regret bounds, making our algorithm superior to all existing algorithms with linear function approximation. At the core of our algorithm are a novel minimax value function estimator and a multi-level partition scheme for selecting training samples from historical observations. Both of these techniques are new and of independent interest.
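For context, the uniform-PAC criterion (introduced by Dann, Lattimore, and Brunskill, 2017) can be stated informally as follows; this is the standard definition, not text taken from the paper, and the notation ($V^*$, $\pi_k$, $F$) is generic rather than the paper's own. An algorithm is uniform-PAC if, with probability at least $1-\delta$, simultaneously for all $\epsilon > 0$,
$$
N_\epsilon \;:=\; \sum_{k=1}^{\infty} \mathbb{1}\bigl\{ V^*(s_1^k) - V^{\pi_k}(s_1^k) > \epsilon \bigr\} \;\le\; F\!\left(\tfrac{1}{\epsilon}, \log\tfrac{1}{\delta}\right),
$$
where $\pi_k$ is the policy played in episode $k$ and $F$ is polynomial in its arguments. Because one bound holds for every $\epsilon$ at once, it yields an $(\epsilon,\delta)$-PAC sample complexity bound for each fixed $\epsilon$ and a high-probability sublinear regret bound, which is why uniform-PAC subsumes the other two guarantees.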

Related articles:
arXiv:2011.11566 [cs.LG] (Published 2020-11-23)
Logarithmic Regret for Reinforcement Learning with Linear Function Approximation
arXiv:1909.02877 [cs.LG] (Published 2019-09-06)
Gradient Q$(σ, λ)$: A Unified Algorithm with Function Approximation for Reinforcement Learning
arXiv:cs/0306120 [cs.LG] (Published 2003-06-22, updated 2007-03-09)
Reinforcement Learning with Linear Function Approximation and LQ control Converges