arXiv Analytics

arXiv:2210.00701 [cs.LG]

Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation

Dan Qiao, Yu-Xiang Wang

Published 2022-10-03 (Version 1)

We study the problem of deployment-efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ and planning horizon $H$, we propose a new algorithm that collects at most $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ trajectories within $H$ deployments to identify an $\epsilon$-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal $d$ dependence in sample complexity at the same time, even if the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, which could be of independent interest. Lastly, we analyze the related problem of regret minimization in low-adaptive RL and provide information-theoretic lower bounds for switching cost and batch complexity.
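
For context on the G-optimal design ingredient: the abstract does not spell out the paper's generalized version, so the following is only the classical formulation, with illustrative notation (a finite feature set $\mathcal{X} \subset \mathbb{R}^d$, a design $\rho$ in the simplex $\Delta(\mathcal{X})$, and its information matrix $A(\rho)$) that is not taken from the paper. A classical G-optimal design solves
\[
\min_{\rho \in \Delta(\mathcal{X})} \; \max_{x \in \mathcal{X}} \; x^\top A(\rho)^{-1} x,
\qquad
A(\rho) = \sum_{x \in \mathcal{X}} \rho(x)\, x x^\top,
\]
and by the Kiefer-Wolfowitz theorem the optimal value equals $d$. Sampling $n$ features according to such a design therefore bounds the worst-case prediction variance of least squares by roughly $d/n$, so on the order of $\widetilde{O}(d/\epsilon^2)$ samples give uniformly $\epsilon$-accurate predictions over $\mathcal{X}$; this is the flavor of per-deployment guarantee that sits behind sample-complexity bounds like the $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ stated above.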

Related articles:
arXiv:2006.11274 [cs.LG] (Published 2020-06-19)
On Reward-Free Reinforcement Learning with Linear Function Approximation
arXiv:2406.11686 [cs.LG] (Published 2024-06-17, updated 2024-06-18)
The Role of Inherent Bellman Error in Offline Reinforcement Learning with Linear Function Approximation
arXiv:2310.18919 [cs.LG] (Published 2023-10-29)
Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation