arXiv Analytics

arXiv:2207.00986 [cs.LG]

Stabilizing Off-Policy Deep Reinforcement Learning from Pixels

Edoardo Cetin, Philip J. Ball, Steve Roberts, Oya Celiktutan

Published 2022-07-03, Version 1

Off-policy reinforcement learning (RL) from pixel observations is notoriously unstable. As a result, many successful algorithms must combine different domain-specific practices and auxiliary losses to learn meaningful behaviors in complex environments. In this work, we provide a novel analysis demonstrating that these instabilities arise from performing temporal-difference learning with a convolutional encoder and low-magnitude rewards. We show that this new visual deadly triad causes unstable training and premature convergence to degenerate solutions, a phenomenon we name catastrophic self-overfitting. Based on our analysis, we propose A-LIX, a method providing adaptive regularization to the encoder's gradients that explicitly prevents the occurrence of catastrophic self-overfitting using a dual objective. By applying A-LIX, we significantly outperform the prior state-of-the-art on the DeepMind Control and Atari 100k benchmarks without any data augmentation or auxiliary losses.
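The abstract does not spell out the regularization mechanism, but a minimal sketch of the general idea, assuming a local feature-mixing layer applied to the convolutional encoder's output with a tunable mixing strength, might look like the PyTorch snippet below. The `LocalFeatureMixing` module and its `strength` parameter are illustrative names, not the authors' code, and the dual-objective update that would adapt the strength online is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalFeatureMixing(nn.Module):
    """Illustrative sketch (not the paper's exact implementation):
    randomly displaces each spatial feature location by a sub-pixel
    amount and bilinearly resamples, which smooths the gradients that
    flow back into the convolutional encoder."""

    def __init__(self, init_strength: float = 0.5):
        super().__init__()
        # Mixing strength; an adaptive scheme would update this via a
        # dual objective targeting a bound on gradient roughness.
        self.strength = nn.Parameter(torch.tensor(init_strength))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) output of the convolutional encoder
        B, C, H, W = features.shape
        # Base sampling grid in normalized [-1, 1] coordinates
        ys = torch.linspace(-1.0, 1.0, H, device=features.device)
        xs = torch.linspace(-1.0, 1.0, W, device=features.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(B, H, W, 2)
        # Random sub-pixel shifts, scaled by the (clamped) mixing strength
        cell = torch.tensor([2.0 / W, 2.0 / H], device=features.device)
        noise = torch.rand(B, H, W, 2, device=features.device) - 0.5
        shift = noise * cell * self.strength.clamp(0.0, 1.0)
        # Resample the feature map at the jittered locations
        return F.grid_sample(features, grid + shift, mode="bilinear",
                             padding_mode="border", align_corners=True)
```

In the full method described by the abstract, the mixing strength would presumably not stay fixed: the "dual objective" suggests it is adjusted during training so that a measure of encoder-gradient discontinuity stays below a target, which is the adaptive part this sketch leaves out.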

Related articles:
arXiv:1612.07307 [cs.LG] (Published 2016-12-21)
Loss is its own Reward: Self-Supervision for Reinforcement Learning
arXiv:2311.11108 [cs.LG] (Published 2023-11-18)
Auxiliary Losses for Learning Generalizable Concept-based Models
arXiv:1708.06832 [cs.LG] (Published 2017-08-22)
Anytime Neural Networks via Joint Optimization of Auxiliary Losses