arXiv Analytics

arXiv:1810.12558 [cs.LG]

Relative Importance Sampling For Off-Policy Actor-Critic in Deep Reinforcement Learning

Mahammad Humayoo, Xueqi Cheng

Published 2018-10-30 (Version 1)

Off-policy learning is less stable than on-policy learning in reinforcement learning (RL). One reason for this instability is the discrepancy between the target policy ($\pi$) and behavior policy ($b$) distributions. This discrepancy can be alleviated by employing a smooth variant of importance sampling (IS), such as relative importance sampling (RIS), which has a parameter $\beta \in [0, 1]$ that controls the smoothness. To cope with the instability, we present the first relative importance sampling off-policy actor-critic (RIS-Off-PAC) model-free algorithms in RL. In our method, the network yields a target policy (the actor), a value function (the critic) assessing the current policy ($\pi$), and a behavior policy. We train our algorithm using action values generated from the behavior policy rather than from the target policy, and we use deep neural networks for both the actor and the critic. We evaluate our algorithm on a number of OpenAI Gym benchmark problems and demonstrate better or comparable performance to several state-of-the-art RL baselines.
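
The abstract does not spell out the RIS weight itself, so the following is a minimal sketch assuming RIS takes the relative density-ratio form $\pi(a|s) / (\beta \pi(a|s) + (1-\beta) b(a|s))$, which reduces to the ordinary IS ratio $\pi/b$ at $\beta = 0$ and flattens toward a bounded weight as $\beta$ grows. The function and values below are illustrative, not taken from the paper.

def ris_weight(pi_prob, b_prob, beta):
    """Relative importance sampling weight (illustrative form).

    pi_prob: probability of the action under the target policy pi
    b_prob:  probability of the action under the behavior policy b
    beta:    smoothness parameter in [0, 1]; beta = 0 recovers the
             ordinary importance ratio pi / b, while larger beta damps
             the weight and bounds it above by 1 / beta.
    """
    return pi_prob / (beta * pi_prob + (1.0 - beta) * b_prob)

# Example: the ordinary IS ratio 0.9 / 0.1 = 9.0 is progressively damped.
for beta in (0.0, 0.1, 0.5, 1.0):
    print(beta, ris_weight(0.9, 0.1, beta))  # 9.0, 5.0, 1.8, 1.0

Bounding the importance weights in this way trades a small amount of bias for lower variance in the off-policy updates, which is consistent with the stability motivation given in the abstract.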

Related articles:
arXiv:1805.11088 [cs.LG] (Published 2018-05-26): Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation
arXiv:1805.03359 [cs.LG] (Published 2018-05-09): Reward Estimation for Variance Reduction in Deep Reinforcement Learning
arXiv:1901.02219 [cs.LG] (Published 2019-01-08): Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning