
arXiv:2008.06043 [cs.LG]

Offline Meta-Reinforcement Learning with Advantage Weighting

Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn

Published 2020-08-13 (Version 1)

Massive datasets have proven critical to successfully applying deep learning to real-world problems, catalyzing progress on tasks such as object recognition, speech transcription, and machine translation. In this work, we study an analogous problem within reinforcement learning: can we enable an agent to leverage large, diverse experience from previous tasks in order to quickly learn a new task? While recent work has shown some promise for offline reinforcement learning, considerably less work has studied how we might leverage offline behavioral data when transferring to new tasks. To address this gap, we consider the problem setting of offline meta-reinforcement learning. Because it is fully offline, an offline meta-RL algorithm can utilize the largest possible pool of training data available and eliminate potentially unsafe or costly data collection during meta-training. Targeting this setting, we propose Meta-Actor Critic with Advantage Weighting (MACAW), an optimization-based meta-learning algorithm that uses simple, supervised regression objectives for both inner-loop adaptation and outer-loop meta-learning. To our knowledge, MACAW is the first successful combination of gradient-based meta-learning and value-based reinforcement learning. We empirically find that this approach enables fully offline meta-reinforcement learning and achieves notable gains over prior methods in some settings.
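To make the recipe the abstract describes more concrete, the sketch below illustrates the general idea of an optimization-based meta-learner whose inner-loop adaptation and outer-loop meta-update are both supervised, advantage-weighted regression losses on offline data. This is a minimal, hypothetical rendering, not the authors' MACAW implementation: it assumes PyTorch 2.x, all names (PolicyNet, adv_weighted_nll, maml_step, the temperature beta) are invented for illustration, and MACAW's value-function training and architectural details are omitted.

```python
# Illustrative sketch only: MAML-style outer loop + advantage-weighted
# regression inner loop on offline (obs, action, advantage) batches.
# Assumes PyTorch 2.x (torch.func.functional_call).
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Gaussian policy mean network (unit variance, for simplicity)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.body(obs)  # mean action


def adv_weighted_nll(params, policy, obs, act, adv, beta=1.0):
    """Supervised regression objective: negative log-likelihood of dataset
    actions, weighted by exp(advantage / beta)."""
    mean = torch.func.functional_call(policy, params, (obs,))
    logp = -0.5 * ((act - mean) ** 2).sum(-1)        # unit-variance Gaussian log-prob (up to a constant)
    weights = torch.exp(adv / beta).clamp(max=20.0)  # advantage weights, clipped for stability
    return -(weights * logp).mean()


def maml_step(policy, meta_opt, task_batches, inner_lr=0.01):
    """One outer-loop update: per task, adapt with one inner gradient step on
    the advantage-weighted loss, then meta-update on held-out offline data."""
    meta_opt.zero_grad()
    params = dict(policy.named_parameters())
    outer_loss = 0.0
    for support, query in task_batches:  # each is an (obs, act, adv) tuple of tensors
        inner_loss = adv_weighted_nll(params, policy, *support)
        grads = torch.autograd.grad(inner_loss, tuple(params.values()), create_graph=True)
        adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
        outer_loss = outer_loss + adv_weighted_nll(adapted, policy, *query)
    (outer_loss / len(task_batches)).backward()
    meta_opt.step()


# Hypothetical usage with random stand-in offline data for two tasks:
if __name__ == "__main__":
    torch.manual_seed(0)
    policy = PolicyNet(obs_dim=4, act_dim=2)
    meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def fake_batch(n=32):
        return (torch.randn(n, 4), torch.randn(n, 2), torch.randn(n))

    tasks = [(fake_batch(), fake_batch()) for _ in range(2)]
    maml_step(policy, meta_opt, tasks)
```

The only point of the sketch is the structure the abstract emphasizes: both the inner-loop adaptation objective and the outer-loop meta-learning objective are plain supervised regression losses computed on offline data, so no environment interaction is required during meta-training.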