arXiv:1612.07307 [cs.LG]

Loss is its own Reward: Self-Supervision for Reinforcement Learning

Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell

Published 2016-12-21 (Version 1)

Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.
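To make the idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of pairing a reward-driven policy loss with a self-supervised auxiliary loss computed from states, actions, and successors. The auxiliary task here is inverse dynamics (predicting the action from a state/successor pair); the network sizes, the REINFORCE-style policy loss, and the weighting coefficient aux_weight are assumptions for illustration only.

```python
# Illustrative sketch: combine a policy-gradient loss with a self-supervised
# auxiliary loss that shares the same state encoder. All architecture choices
# and hyperparameters below are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, n_actions, hidden = 8, 4, 64

# Shared encoder whose representation is trained by both losses.
encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
policy_head = nn.Linear(hidden, n_actions)        # pi(a | s)
inverse_head = nn.Linear(2 * hidden, n_actions)   # predict a from (s, s')

params = (list(encoder.parameters()) + list(policy_head.parameters())
          + list(inverse_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

def training_step(states, actions, successors, returns, aux_weight=0.1):
    """One update on a batch of (s, a, s', R) transitions."""
    h, h_next = encoder(states), encoder(successors)

    # Reward-driven loss: REINFORCE-style policy gradient on observed returns.
    log_probs = F.log_softmax(policy_head(h), dim=-1)
    chosen_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen_log_probs * returns).mean()

    # Self-supervised auxiliary loss: inverse dynamics over (s, s') pairs.
    # It needs no reward, so it supervises the encoder on every transition.
    action_logits = inverse_head(torch.cat([h, h_next], dim=-1))
    aux_loss = F.cross_entropy(action_logits, actions)

    loss = policy_loss + aux_weight * aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random tensors standing in for sampled transitions.
B = 32
training_step(torch.randn(B, state_dim),
              torch.randint(0, n_actions, (B,)),
              torch.randn(B, state_dim),
              torch.randn(B))
```

Because the auxiliary term depends only on transitions, it provides a gradient for the shared encoder even on trajectories that earn no reward, which is the sense in which the abstract calls such supervision "ubiquitous and instantaneous."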
