arXiv:1801.09624 [cs.LG]

Learning the Reward Function for a Misspecified Model

Erik Talvitie

Published 2018-01-29 (Version 1)

In model-based reinforcement learning it is typical to treat the problems of learning the dynamics model and learning the reward function separately. However, when the dynamics model is flawed, it may generate erroneous states that would never occur in the true environment. A reward function trained only to map environment states to rewards (as is typical) would have little guidance in such states. This paper presents a novel error bound that accounts for the reward model's behavior in states sampled from the model. This bound is used to extend the existing Hallucinated DAgger-MC algorithm, which offers theoretical performance guarantees in deterministic MDPs that do not assume a perfect model can be learned. Empirically, this approach to reward learning can yield dramatic improvements in control performance when the dynamics model is flawed.
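The core idea — that a reward model trained only on real environment states may behave arbitrarily on the erroneous states a flawed dynamics model generates, so it should also be trained on model-sampled states — can be illustrated with a toy sketch. This is not the paper's Hallucinated DAgger-MC algorithm; it is a minimal one-dimensional illustration under invented assumptions (the environment, the `flawed_model_step` over-shoot, and the linear reward model are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic setting: states s lie in [0, 1], true reward r(s) = s^2.
def true_reward(s):
    return s ** 2

# Hypothetical flawed dynamics model: it systematically over-shoots, so its
# predicted next states drift outside the true state distribution ("hallucinated"
# states the real environment would never produce).
def flawed_model_step(s):
    return s + 0.3

# States actually visited in the environment.
env_states = rng.uniform(0.0, 1.0, size=200)

# States obtained by rolling the flawed model forward from real states.
model_states = flawed_model_step(env_states)

# A deliberately limited-capacity reward model: linear least-squares fit.
def fit_reward(states, rewards, degree=1):
    X = np.vander(states, degree + 1)
    w, *_ = np.linalg.lstsq(X, rewards, rcond=None)
    return lambda s: np.vander(np.atleast_1d(s), degree + 1) @ w

# (a) The typical approach: train the reward model on environment states only.
r_env = fit_reward(env_states, true_reward(env_states))

# (b) Also train on model-sampled states, so the reward model is anchored on
#     the (possibly erroneous) states the planner will actually evaluate.
both = np.concatenate([env_states, model_states])
r_both = fit_reward(both, true_reward(both))

# Compare squared error on hallucinated states, i.e. where planning happens.
test = flawed_model_step(rng.uniform(0.0, 1.0, size=100))
err_env = np.mean((r_env(test) - true_reward(test)) ** 2)
err_both = np.mean((r_both(test) - true_reward(test)) ** 2)
```

In this toy, the environment-only reward model must extrapolate to the hallucinated states and incurs larger error there, while the model trained on both distributions stays closer to the true reward — mirroring the paper's motivation for bounding the reward model's error under the model's own state distribution.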
