arXiv Analytics

arXiv:1712.06924 [cs.LG]

Safe Policy Improvement with Baseline Bootstrapping

Romain Laroche, Paul Trichelair, Layla El Asri

Published 2017-12-19 (version 1)

A common goal in reinforcement learning is to derive a good policy from a limited batch of data. In this paper, we adopt the safe policy improvement (SPI) approach: we compute a target policy that is guaranteed to perform at least as well as a given baseline policy. Our SPI strategy, inspired by the knows-what-it-knows paradigm, consists in bootstrapping the target policy with the baseline policy where it does not know. We develop two computationally efficient bootstrapping algorithms, one value-based and one policy-based, both accompanied by theoretical SPI bounds. Three algorithm variants are proposed. We empirically show the limits of existing algorithms on a small stochastic gridworld problem, and then demonstrate that our five algorithms improve not only the worst-case performance but also the mean performance.
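The bootstrapping idea from the abstract can be pictured with a minimal, illustrative sketch (not the paper's code): the target policy keeps the baseline's probability mass on the state-action pairs it does not know well, and acts greedily elsewhere. The tabular setting, the function name, and the count threshold `n_threshold` below are assumptions made for illustration only.

```python
import numpy as np

def spibb_like_greedy_step(q, counts, pi_baseline, n_threshold):
    """One greedy improvement step that bootstraps on the baseline.

    For state-action pairs seen fewer than `n_threshold` times in the batch
    ("unknown" pairs), the new policy keeps the baseline's probability mass.
    In each state, the leftover mass goes to the best-estimated known action.

    q           : (S, A) array of Q-value estimates from the batch
    counts      : (S, A) array of state-action visit counts in the batch
    pi_baseline : (S, A) array of baseline policy probabilities
    n_threshold : minimum count for a pair to be treated as "known"
    """
    S, A = q.shape
    pi_new = np.zeros_like(pi_baseline)
    for s in range(S):
        unknown = counts[s] < n_threshold
        # Keep the baseline probabilities on under-sampled (unknown) actions.
        pi_new[s, unknown] = pi_baseline[s, unknown]
        known = ~unknown
        if known.any():
            # Put the remaining probability mass on the best known action.
            best = np.flatnonzero(known)[np.argmax(q[s, known])]
            pi_new[s, best] += 1.0 - pi_new[s].sum()
        else:
            # No action is known in this state: fall back to the baseline.
            pi_new[s] = pi_baseline[s]
    return pi_new

# Tiny usage example with random data (2 states, 3 actions).
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 3))
counts = rng.integers(0, 10, size=(2, 3))
pi_b = np.full((2, 3), 1 / 3)
print(spibb_like_greedy_step(q, counts, pi_b, n_threshold=5))
```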

Related articles:
arXiv:1907.05079 [cs.LG] (Published 2019-07-11)
Safe Policy Improvement with Soft Baseline Bootstrapping
arXiv:2010.12645 [cs.LG] (Published 2020-10-23)
Towards Safe Policy Improvement for Non-Stationary MDPs
arXiv:2305.07958 [cs.LG] (Published 2023-05-13)
More for Less: Safe Policy Improvement With Stronger Performance Guarantees