arXiv Analytics

arXiv:2106.11960 [cs.LG]

Variance-Aware Off-Policy Evaluation with Linear Function Approximation

Yifei Min, Tianhao Wang, Dongruo Zhou, Quanquan Gu

Published 2021-06-22 (Version 1)

We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy. We propose to incorporate the variance information of the value function to improve the sample efficiency of OPE. More specifically, for time-inhomogeneous episodic linear Markov decision processes (MDPs), we propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration. We show that our algorithm achieves a tighter error bound than the best-known result. We also provide a fine-grained characterization of the distribution shift between the behavior policy and the target policy. Extensive numerical experiments corroborate our theory.
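To make the idea concrete, below is a minimal sketch (not the authors' VA-OPE algorithm) of how variance information can reweight the Bellman residual inside fitted Q-evaluation for a time-inhomogeneous episodic linear MDP: each regression target is weighted by the inverse of an estimated variance, so low-variance transitions dominate the least-squares fit. All function and argument names (`weighted_lsq_step`, `fitted_q_evaluation`, `var_estimates`, the `data[h]` layout) are hypothetical, and the variance estimates are assumed to be supplied externally.

```python
import numpy as np


def weighted_lsq_step(phi, targets, var_est, lam=1.0):
    """Solve the inverse-variance-weighted ridge regression
        min_w  sum_i (phi_i^T w - y_i)^2 / sigma_i^2  +  lam * ||w||^2,
    where sigma_i^2 is the estimated variance of the Bellman target y_i."""
    weights = 1.0 / np.maximum(var_est, 1e-6)            # clip to avoid division by zero
    A = (phi * weights[:, None]).T @ phi + lam * np.eye(phi.shape[1])
    b = (phi * weights[:, None]).T @ targets
    return np.linalg.solve(A, b)


def fitted_q_evaluation(data, var_estimates, horizon, dim, lam=1.0):
    """Backward-in-time fitted Q-evaluation with variance-weighted regression.

    data[h] = (phi_h, r_h, phi_next_h): features of the offline state-action
    pairs at step h, observed rewards, and features of the next state paired
    with the target policy's action.  var_estimates[h] holds the estimated
    variance of the step-(h+1) value function at each transition (assumed given).
    """
    w = [np.zeros(dim) for _ in range(horizon + 1)]      # w[horizon] = 0 terminal weight
    for h in reversed(range(horizon)):
        phi_h, r_h, phi_next = data[h]
        targets = r_h + phi_next @ w[h + 1]              # Bellman targets from the next-step estimate
        w[h] = weighted_lsq_step(phi_h, targets, var_estimates[h], lam)
    return w                                             # Q_h(s, a) is approximated by phi(s, a)^T w[h]
```

Setting every entry of `var_estimates[h]` to 1 recovers plain (unweighted) fitted Q-iteration, which is the baseline the reweighting is meant to improve on.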

Related articles: Most relevant | Search more
arXiv:2103.09847 [cs.LG] (Published 2021-03-17)
Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm
arXiv:2002.09516 [cs.LG] (Published 2020-02-21)
Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation
arXiv:1907.05388 [cs.LG] (Published 2019-07-11)
Provably Efficient Reinforcement Learning with Linear Function Approximation