arXiv:2112.07066 [cs.LG]

Continual Learning In Environments With Polynomial Mixing Times

Matthew Riemer, Sharath Chandra Raparthy, Ignacio Cases, Gopeshh Subbaraj, Maximilian Puelma Touzel, Irina Rish

Published 2021-12-13, last revised 2022-10-13 (version 2)

The mixing time of the Markov chain induced by a policy limits performance in real-world continual learning scenarios. Yet the effect of mixing times on learning in continual reinforcement learning (RL) remains underexplored. In this paper, we characterize problems that are of long-term interest to the development of continual RL, which we call scalable MDPs, through the lens of mixing times. In particular, we theoretically establish that scalable MDPs have mixing times that scale polynomially with the size of the problem. We go on to demonstrate that polynomial mixing times present significant difficulties for existing approaches, which suffer from myopic bias and stale bootstrapped estimates. To validate our theory, we study the empirical scaling behavior of mixing times with respect to the number of tasks and task duration for high-performing policies deployed across multiple Atari games. Our analysis demonstrates that polynomial mixing times do emerge in practice and that their existence can lead to unstable learning behavior, such as catastrophic forgetting, in continual learning settings.
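
To make the central quantity concrete, the following is a minimal sketch (not from the paper; the transition tensor, policy, and epsilon below are illustrative assumptions) of how one might compute the epsilon-mixing time of the Markov chain induced by a fixed policy on a tiny finite MDP, using the standard total-variation definition.

import numpy as np

# Illustrative only: estimate the epsilon-mixing time of the Markov chain
# induced by a fixed policy on a tiny finite MDP. The transition tensor P_sa,
# the policy, and epsilon are hypothetical, not taken from the paper.

def induced_chain(P_sa, policy):
    # P[s, s'] = sum_a policy[s, a] * P_sa[s, a, s']
    return np.einsum('sa,sab->sb', policy, P_sa)

def mixing_time(P, eps=0.25, max_steps=10000):
    # Smallest t such that max_s ||P^t(s, .) - pi||_TV <= eps, where pi is the
    # stationary distribution (left eigenvector of P for eigenvalue 1).
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    Pt = np.eye(P.shape[0])
    for t in range(1, max_steps + 1):
        Pt = Pt @ P
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        if tv <= eps:
            return t
    return None  # chain did not mix within max_steps

# Two states, two actions; the policy mostly picks the action that keeps the
# chain in its current state, so mixing is slower than for a uniform policy.
P_sa = np.array([[[0.9, 0.1], [0.1, 0.9]],
                 [[0.9, 0.1], [0.1, 0.9]]])
policy = np.array([[0.95, 0.05],
                   [0.05, 0.95]])
print(mixing_time(induced_chain(P_sa, policy)))

Direct matrix powering like this is only feasible for toy chains; for the Atari-scale settings studied in the paper, mixing times would have to be estimated rather than computed exactly. The sketch is meant only to pin down the quantity whose polynomial scaling the abstract refers to.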
