arXiv:2101.05265 [cs.LG]

Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning

Rishabh Agarwal, Marlos C. Machado, Pablo Samuel Castro, Marc G. Bellemare

Published: 2021-01-13 (Version 1)

Reinforcement learning methods trained on only a few environments rarely learn policies that generalize to unseen environments. To improve generalization, we incorporate the inherent sequential structure of reinforcement learning into the representation learning process. This approach is orthogonal to recent methods, which rarely exploit this structure explicitly. Specifically, we introduce a theoretically motivated policy similarity metric (PSM) for measuring behavioral similarity between states. PSM assigns high similarity to states for which the optimal policies in those states, as well as in future states, are similar. We also present a contrastive representation learning procedure to embed any state similarity metric, which we instantiate with PSM to obtain policy similarity embeddings (PSEs). We demonstrate that PSEs improve generalization on diverse benchmarks, including LQR with spurious correlations, a jumping task from pixels, and the Distracting DM Control Suite.
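
As a quick orientation (this paraphrases the paper's definitions from memory; consult the paper for the exact statement), the PSM d* is the fixed point of a bisimulation-style recursion in which the usual reward-difference term is replaced by a distance between optimal policies:

    d*(x, y) = DIST(π*(x), π*(y)) + γ · W₁(d*)(P^{π*}(·|x), P^{π*}(·|y))

Here DIST is a probability pseudometric between action distributions, γ is the discount factor, and W₁(d*) is the 1-Wasserstein distance under d* between the next-state distributions induced by the optimal policy π*. The contrastive procedure then converts the metric into soft similarity targets, roughly Γ(x, y) = exp(−d*(x, y)/β) for a temperature-like scale β, which weight pairs in a SimCLR-style loss to produce the embedding.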

Comments: Accepted at ICLR 2021 (Spotlight). Website: https://agarwl.github.io/pse
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:2011.00517 [cs.LG] (Published 2020-11-01)
Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning
arXiv:2107.06277 [cs.LG] (Published 2021-07-13)
Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability
arXiv:2011.05348 [cs.LG] (Published 2020-11-10)
SALR: Sharpness-aware Learning Rates for Improved Generalization