arXiv:1203.3481 [cs.LG]

Real-Time Scheduling via Reinforcement Learning

Robert Glaubius, Terry Tidwell, Christopher Gill, William D. Smart

Published 2012-03-15 (Version 1)

Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks be performed in a timely manner. Additionally, execution of mission-specific tasks such as imaging a room must be balanced against the need to perform more general tasks such as obstacle avoidance. This problem has been addressed by maintaining the relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain, and demonstrate that although the problem state space is countably infinite, we can leverage the problem's structure to guarantee efficient learning.
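The utilization-target idea in the abstract can be illustrated with a minimal sketch. This is not the authors' formulation or their learning algorithm: it is a hypothetical greedy dispatcher in which the state is the cumulative processor time granted to each task, and each dispatch decision tries to keep the tasks' utilization shares near user-specified targets. The target shares and task durations below are made-up example values.

```python
# Illustrative sketch only, not the paper's method: a greedy dispatcher
# that keeps each task's share of total resource usage near a target.
TARGETS = [0.7, 0.3]      # hypothetical target utilization shares
DURATIONS = [2, 1]        # hypothetical per-dispatch run times

def cost(usage):
    """Distance between observed utilization shares and the targets."""
    total = sum(usage)
    if total == 0:
        return 0.0
    return sum(abs(u / total - t) for u, t in zip(usage, TARGETS))

def greedy_dispatch(usage):
    """Pick the task whose next run best restores the target shares."""
    candidates = []
    for i in range(len(usage)):
        hypothetical = list(usage)
        hypothetical[i] += DURATIONS[i]   # state after running task i
        candidates.append((cost(hypothetical), i))
    return min(candidates)[1]

def simulate(steps):
    """Run the dispatcher and return the final cumulative usage state."""
    usage = [0, 0]
    for _ in range(steps):
        chosen = greedy_dispatch(usage)
        usage[chosen] += DURATIONS[chosen]
    return usage
```

Note the state (cumulative usage per task) grows without bound, which is the countably infinite state space the abstract refers to; the structure the paper exploits is that states with the same utilization shares behave alike, which is why even this greedy sketch only needs the shares, not the raw counts, to act well.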

Comments: Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
Categories: cs.LG, cs.AI, stat.ML