arXiv Analytics

arXiv:1901.02219 [cs.LG]

Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning

Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien

Published 2019-01-08 (Version 1)

We consider the problem of detecting out-of-distribution (OOD) samples in deep reinforcement learning. In a value-based reinforcement learning setting, we propose to apply uncertainty estimation techniques directly to the agent's value-estimating neural network to detect OOD samples. The focus of our work lies in analyzing the suitability of approximate Bayesian inference methods and related ensembling techniques for generating uncertainty estimates. Although prior work has shown that dropout-based variational inference techniques and bootstrap-based approaches can be used to model epistemic uncertainty, their suitability for detecting OOD samples in deep reinforcement learning remains an open question. Our results show that uncertainty estimation can be used to differentiate in-distribution from out-of-distribution samples. Over the complete training process of the reinforcement learning agents, bootstrap-based approaches tend to produce more reliable epistemic uncertainty estimates than dropout-based approaches.
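To illustrate the bootstrap-based variant the abstract refers to, the following is a minimal sketch of using ensemble disagreement over Q-value estimates as an epistemic uncertainty signal for OOD detection. It assumes a PyTorch setting; the class QEnsemble, the helpers epistemic_uncertainty and is_ood, and the threshold value are illustrative choices for this sketch, not the authors' implementation.

# Sketch: ensemble-based epistemic uncertainty for OOD detection in value-based RL.
# Names (QEnsemble, epistemic_uncertainty, is_ood) are illustrative, not from the paper.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """Bootstrap-style ensemble of small Q-networks over the same observation."""

    def __init__(self, obs_dim: int, n_actions: int, n_heads: int = 5, hidden: int = 64):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )
            for _ in range(n_heads)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Per-head Q-value estimates, shape (n_heads, batch, n_actions).
        return torch.stack([head(obs) for head in self.heads], dim=0)


def epistemic_uncertainty(q_values: torch.Tensor) -> torch.Tensor:
    # Disagreement among heads: mean per-action standard deviation, shape (batch,).
    return q_values.std(dim=0).mean(dim=-1)


def is_ood(q_values: torch.Tensor, threshold: float) -> torch.Tensor:
    # Flag a state as OOD when head disagreement exceeds a threshold calibrated
    # on in-distribution states (e.g. a quantile of training-time uncertainty).
    return epistemic_uncertainty(q_values) > threshold


if __name__ == "__main__":
    ensemble = QEnsemble(obs_dim=8, n_actions=4)
    obs = torch.randn(2, 8)            # two example states
    q = ensemble(obs)                  # (n_heads, 2, n_actions)
    print(epistemic_uncertainty(q))    # higher values suggest OOD states
    print(is_ood(q, threshold=0.5))

In this sketch the heads would be trained on bootstrapped experience as in the paper's ensembling setup, and the threshold would be chosen from the uncertainty observed on in-distribution states; a dropout-based variant would instead run repeated stochastic forward passes of a single network with dropout left active at evaluation time.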

Related articles:
arXiv:1810.12558 [cs.LG] (Published 2018-10-30)
Relative Importance Sampling For Off-Policy Actor-Critic in Deep Reinforcement Learning
arXiv:1805.11088 [cs.LG] (Published 2018-05-26)
Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation
arXiv:1901.01379 [cs.LG] (Published 2019-01-05)
Deep Reinforcement Learning for Imbalanced Classification