arXiv Analytics

arXiv:1903.06151 [cs.LG]

Deep Reinforcement Learning with Feedback-based Exploration

Jan Scholten, Daan Wout, Carlos Celemin, Jens Kober

Published 2019-03-14 (Version 1)

Deep Reinforcement Learning has enabled the control of increasingly complex and high-dimensional problems. However, the need for vast amounts of data before reasonable performance is attained prevents its widespread application. We employ binary corrective feedback as a general and intuitive means of incorporating human insight and domain knowledge into model-free machine learning. The uncertainty in the policy and the corrective feedback are combined directly in the action space as probabilistic conditional exploration. As a result, most of the otherwise uninformed learning process can be avoided. We demonstrate the proposed method, Predictive Probabilistic Merging of Policies (PPMP), in combination with DDPG. In experiments on continuous control problems of the OpenAI Gym, we achieve drastic improvements in sample efficiency, final performance, and robustness to erroneous feedback, for both human and synthetic feedback. Additionally, we show solutions that go beyond the demonstrated knowledge.
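To make the idea of uncertainty-scaled corrective feedback concrete, here is a minimal sketch of merging a policy action with binary corrections in the action space. It is illustrative only and not the paper's exact PPMP formulation: the function name merge_action, the per-dimension standard-deviation uncertainty estimate, the fixed correction gain, and the Gaussian exploration term are all assumptions made for this example.

```python
import numpy as np

def merge_action(policy_action, policy_std, feedback, correction_gain=1.0):
    """Illustrative merge of a policy action with binary corrective feedback.

    policy_action   : np.ndarray -- action proposed by the (e.g. DDPG) actor
    policy_std      : np.ndarray -- assumed per-dimension uncertainty estimate of the policy
    feedback        : np.ndarray -- binary corrective signal in {-1, 0, +1} per dimension
                                    (0 means no feedback was given)
    correction_gain : float      -- hypothetical gain scaling how far a correction pushes the action
    """
    # When the policy is uncertain, a correction moves the executed action further;
    # when the policy is confident, the same feedback has little effect.
    correction = correction_gain * policy_std * feedback
    # Exploration noise is likewise scaled by the uncertainty estimate, so exploration
    # is conditioned on how unsure the policy still is (probabilistic conditional exploration).
    exploration = np.random.normal(0.0, policy_std, size=policy_action.shape)
    return np.clip(policy_action + correction + exploration, -1.0, 1.0)

# Hypothetical usage with a 2-D continuous action:
a = merge_action(policy_action=np.array([0.2, -0.5]),
                 policy_std=np.array([0.3, 0.1]),
                 feedback=np.array([+1, 0]))
```

The key design choice sketched here is that feedback and exploration are both weighted by the policy's own uncertainty, which is how the bulk of uninformed exploration early in training can be skipped.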

Related articles:
arXiv:1803.11115 [cs.LG] (Published 2018-03-29)
Deep Reinforcement Learning for Traffic Light Control in Vehicular Networks
arXiv:1805.11088 [cs.LG] (Published 2018-05-26)
Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation
arXiv:1806.01175 [cs.LG] (Published 2018-06-04)
TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning