arXiv Analytics


arXiv:2210.08323 [cs.LG]

A Policy-Guided Imitation Approach for Offline Reinforcement Learning

Haoran Xu, Li Jiang, Jianxiong Li, Xianyuan Zhan

Published 2022-10-15, Version 1

Offline reinforcement learning (RL) methods can generally be categorized into two types: RL-based and Imitation-based. RL-based methods could in principle enjoy out-of-distribution generalization but suffer from erroneous off-policy evaluation. Imitation-based methods avoid off-policy evaluation but are too conservative to surpass the dataset. In this study, we propose an alternative approach, inheriting the training stability of imitation-style methods while still allowing logical out-of-distribution generalization. We decompose the conventional reward-maximizing policy in offline RL into a guide-policy and an execute-policy. During training, the guide-policy and execute-policy are learned using only data from the dataset, in a supervised and decoupled manner. During evaluation, the guide-policy guides the execute-policy by telling it where it should go so that the reward can be maximized, serving as the \textit{Prophet}. By doing so, our algorithm allows \textit{state-compositionality} from the dataset, rather than \textit{action-compositionality} conducted in prior imitation-style methods. We dub this new approach Policy-guided Offline RL (\texttt{POR}). \texttt{POR} demonstrates state-of-the-art performance on D4RL, a standard benchmark for offline RL. We also highlight the benefits of \texttt{POR} in terms of improving with supplementary suboptimal data and easily adapting to new tasks by only changing the guide-policy.
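A minimal sketch (not the authors' implementation) of the guide/execute decomposition described in the abstract, assuming a PyTorch MLP parameterization: the guide-policy regresses toward next states from the dataset (here with a hypothetical advantage-style weighting), while the execute-policy is trained as an inverse-dynamics model mapping (state, target state) to action. Both losses are supervised and decoupled, matching the training recipe sketched above.

```python
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class PORSketch(nn.Module):
    """Guide-policy + execute-policy decomposition (illustrative sketch)."""

    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.guide = mlp(state_dim, state_dim)         # g(s' | s): where to go
        self.execute = mlp(2 * state_dim, action_dim)  # pi(a | s, s'): how to get there

    def guide_loss(self, s, s_next, weight):
        # Supervised regression toward dataset next states, weighted so that
        # high-return transitions dominate. The exact weighting is an
        # assumption here, not the paper's precise objective.
        pred = self.guide(s)
        return (weight * ((pred - s_next) ** 2).sum(-1)).mean()

    def execute_loss(self, s, s_next, a):
        # Behavior cloning of the action that moved s -> s_next (inverse dynamics).
        pred = self.execute(torch.cat([s, s_next], dim=-1))
        return ((pred - a) ** 2).sum(-1).mean()

    @torch.no_grad()
    def act(self, s):
        # Evaluation: the guide proposes a target state ("the Prophet"),
        # and the execute-policy outputs the action toward it.
        target = self.guide(s)
        return self.execute(torch.cat([s, target], dim=-1))
```

Because the two policies are decoupled, adapting to a new task would only require retraining or swapping the guide network while keeping the execute-policy fixed, which is the adaptation benefit the abstract highlights.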

Related articles: Most relevant | Search more
arXiv:2012.11547 [cs.LG] (Published 2020-12-21)
Offline Reinforcement Learning from Images with Latent Space Models
arXiv:2210.08642 [cs.LG] (Published 2022-10-16)
Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data
arXiv:2111.10919 [cs.LG] (Published 2021-11-21, updated 2022-08-30)
Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation