arXiv Analytics


arXiv:2004.02860 [cs.LG]

Weakly-Supervised Reinforcement Learning for Controllable Behavior

Lisa Lee, Benjamin Eysenbach, Ruslan Salakhutdinov, Shixiang Gu, Chelsea Finn

Published 2020-04-06 (Version 1)

Reinforcement learning (RL) is a powerful framework for learning to take actions to solve tasks. However, in many settings, an agent must winnow down the inconceivably large space of all possible tasks to the single task that it is currently being asked to solve. Can we instead constrain the space of tasks to those that are semantically meaningful? In this work, we introduce a framework for using weak supervision to automatically disentangle this semantically meaningful subspace of tasks from the enormous space of nonsensical "chaff" tasks. We show that this learned subspace enables efficient exploration and provides a representation that captures distance between states. On a variety of challenging, vision-based continuous control problems, our approach leads to substantial performance gains, particularly as the complexity of the environment grows.
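The abstract's core idea, using weak supervision to isolate a semantically meaningful subspace whose distances are useful, can be illustrated with a toy sketch. This is not the paper's method or architecture; it is a minimal assumed setup where weak labels are pairwise comparisons of a single meaningful factor, and a linear encoder is fit so that distances in its 1-d code track that factor while ignoring a nuisance factor.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): weak supervision as
# pairwise comparisons of one "semantically meaningful" factor. The linear
# setup and all names below are assumptions made for illustration.
rng = np.random.default_rng(0)

# Observations are linear mixtures of two latent factors; only factor 0 is
# meaningful (e.g. object position), factor 1 is nuisance (e.g. lighting).
mix = rng.normal(size=(2, 8))  # latent (2-d) -> observation (8-d)

# Weakly-labelled pairs: a binary label says whether factor 0 is larger in
# the first observation of the pair -- no factor values are ever revealed.
n = 2000
z1 = rng.uniform(-1.0, 1.0, size=(n, 2))
z2 = rng.uniform(-1.0, 1.0, size=(n, 2))
o1, o2 = z1 @ mix, z2 @ mix
labels = (z1[:, 0] > z2[:, 0]).astype(float)

# Fit a 1-d linear encoder e(o) = w . o by logistic regression on
# observation differences, so sigmoid(e(o1) - e(o2)) predicts the label.
w = np.zeros(8)
diff = o1 - o2
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-diff @ w))  # full-batch gradient descent
    w -= 0.2 * diff.T @ (pred - labels) / n

# Distances in the learned code should correlate with the meaningful
# factor and be nearly invariant to the nuisance factor.
codes = o1 @ w
corr_meaningful = abs(np.corrcoef(codes, z1[:, 0])[0, 1])
corr_nuisance = abs(np.corrcoef(codes, z1[:, 1])[0, 1])
print(f"{corr_meaningful:.2f} {corr_nuisance:.2f}")
```

The learned code depends strongly on the meaningful factor and only weakly on the nuisance one, which is the sense in which such a representation "captures distance between states" along the dimensions that matter.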

Related articles:
arXiv:1805.08356 [cs.LG] (Published 2018-05-22)
Improved Algorithms for Collaborative PAC Learning
arXiv:1911.03731 [cs.LG] (Published 2019-11-09)
Learning Internal Representations
arXiv:2310.06117 [cs.LG] (Published 2023-10-09)
Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models