arXiv Analytics

arXiv:2011.00517 [cs.LG]

Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning

Valerie Chen, Abhinav Gupta, Kenneth Marino

Published 2020-11-01 (Version 1)

Complex, multi-task problems have proven to be difficult to solve efficiently in a sparse-reward reinforcement learning setting. In order to be sample efficient, multi-task learning requires reuse and sharing of low-level policies. To facilitate the automatic decomposition of hierarchical tasks, we propose the use of step-by-step human demonstrations in the form of natural language instructions and action trajectories. We introduce a dataset of such demonstrations in a crafting-based grid world. Our model consists of a high-level language generator and a low-level policy conditioned on language. We find that human demonstrations help solve the most complex tasks. We also find that incorporating natural language allows the model to generalize to unseen tasks in a zero-shot setting and to learn quickly from a few demonstrations. Generalization is reflected not only in the actions of the agent, but also in the natural language instructions it generates for unseen tasks. Our approach also gives our trained agent interpretable behavior, because it can generate a sequence of high-level descriptions of its actions.
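
The two-level structure described above (a high-level generator that emits a natural language instruction, and a low-level policy that selects primitive actions conditioned on that instruction) can be pictured with a short sketch. The code below is not the authors' implementation: the module names, sizes, toy vocabulary, and greedy decoding are all illustrative assumptions, written in PyTorch only to make the data flow concrete.

```python
# Hedged sketch of a hierarchical, language-conditioned agent.
# Everything here (names, dimensions, vocab, decoding scheme) is assumed,
# not taken from the paper's actual architecture or hyperparameters.
import torch
import torch.nn as nn


class InstructionGenerator(nn.Module):
    """High-level module: maps a state encoding to a short instruction,
    decoded greedily over a toy vocabulary."""

    def __init__(self, state_dim, vocab_size, hidden_dim=128, max_len=8):
        super().__init__()
        self.max_len = max_len
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.init_h = nn.Linear(state_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, state):
        h = torch.tanh(self.init_h(state))
        token = torch.zeros(state.size(0), dtype=torch.long)  # <start> token (id 0)
        tokens = []
        for _ in range(self.max_len):
            h = self.rnn(self.embed(token), h)
            token = self.out(h).argmax(dim=-1)  # greedy next-token choice
            tokens.append(token)
        return torch.stack(tokens, dim=1)  # (batch, max_len) instruction tokens


class LanguageConditionedPolicy(nn.Module):
    """Low-level module: chooses a primitive action from the state plus
    the generated instruction tokens."""

    def __init__(self, state_dim, vocab_size, num_actions, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, state, instruction_tokens):
        lang = self.embed(instruction_tokens).mean(dim=1)  # bag-of-words instruction encoding
        return self.mlp(torch.cat([state, lang], dim=-1))  # action logits


# Toy usage: one forward pass through both levels.
state = torch.randn(1, 32)                      # hypothetical grid-world state encoding
generator = InstructionGenerator(state_dim=32, vocab_size=50)
policy = LanguageConditionedPolicy(state_dim=32, vocab_size=50, num_actions=5)
instruction = generator(state)                  # e.g. tokens standing in for "chop tree"
action = policy(state, instruction).argmax(-1)  # primitive action id
```

The point of the sketch is only the interface: the instruction produced by the high-level module is both the conditioning input for the low-level policy and a human-readable trace of what the agent intends to do, which is the source of the interpretability claim in the abstract.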

Related articles:
arXiv:1301.0601 [cs.LG] (Published 2012-12-12)
Reinforcement Learning with Partially Known World Dynamics
arXiv:2011.05348 [cs.LG] (Published 2020-11-10)
SALR: Sharpness-aware Learning Rates for Improved Generalization
arXiv:1306.6189 [cs.LG] (Published 2013-06-26)
Scaling Up Robust MDPs by Reinforcement Learning