arXiv:1907.02057 [cs.LG]

Benchmarking Model-Based Reinforcement Learning

Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel, Jimmy Ba

Published: 2019-07-03 (Version 1)

Model-based reinforcement learning (MBRL) is widely seen as having the potential to be significantly more sample efficient than model-free RL. However, research in model-based RL has not been very standardized: it is fairly common for authors to experiment with self-designed environments, and there are several separate lines of research, which are sometimes closed-source or not reproducible. Accordingly, it is an open question how these various existing MBRL algorithms perform relative to each other. To facilitate research in MBRL, in this paper we gather a wide collection of MBRL algorithms and propose over 18 benchmarking environments specially designed for MBRL. We benchmark these algorithms with unified problem settings, including noisy environments. Beyond cataloguing performance, we explore and unify the underlying algorithmic differences across MBRL algorithms. We characterize three key research challenges for future MBRL research: the dynamics bottleneck, the planning horizon dilemma, and the early-termination dilemma. Finally, to maximally facilitate future research on MBRL, we open-source our benchmark at http://www.cs.toronto.edu/~tingwuwang/mbrl.html.
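One way to picture the "unified problem settings, including noisy environments" mentioned above is a Gym-style observation-noise wrapper. The sketch below is purely illustrative and is not taken from the authors' benchmark: the wrapper name, the noise level, and the choice of HalfCheetah-v2 are assumptions.

    # Hypothetical sketch: a Gym observation wrapper that adds Gaussian noise
    # to observations, approximating the "noisy environment" setting described
    # in the abstract. Names and parameters are illustrative assumptions.
    import numpy as np
    import gym


    class NoisyObservationWrapper(gym.ObservationWrapper):
        """Adds zero-mean Gaussian noise to every observation."""

        def __init__(self, env, noise_std=0.1):
            super().__init__(env)
            self.noise_std = noise_std

        def observation(self, obs):
            # Perturb only what the agent sees; the underlying state is unchanged.
            return obs + np.random.normal(0.0, self.noise_std, size=obs.shape)


    # Example usage: wrap a standard continuous-control task (requires MuJoCo)
    # to obtain a noisy variant of it.
    env = NoisyObservationWrapper(gym.make("HalfCheetah-v2"), noise_std=0.1)
    obs = env.reset()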

Comments: 8 main pages, 8 figures; 14 appendix pages, 25 figures
Categories: cs.LG, cs.AI, cs.RO, stat.ML