arXiv:2203.03480 [cs.LG]

Reinforcement Learning for Location-Aware Scheduling

Stelios Stavroulakis, Biswa Sengupta

Published 2022-03-07Version 1

Recent techniques in dynamical scheduling and resource management have found applications in warehouse environments due to their ability to organize and prioritize tasks at a higher temporal resolution. The rise of deep reinforcement learning as a learning paradigm has enabled decentralized agent populations to discover complex coordination strategies. However, training multiple agents simultaneously introduces many obstacles, as observation and action spaces become exponentially large. In our work, we experimentally quantify how various aspects of the warehouse environment (e.g., floor plan complexity, information about agents' live locations, level of task parallelizability) affect performance and execution priority. To achieve efficiency, we propose a compact representation of the state and action space for location-aware multi-agent systems, wherein each agent has knowledge of only its own and task coordinates, and hence only partial observability of the underlying Markov Decision Process. Finally, we show how agents trained in certain environments maintain performance in completely unseen settings, and we correlate performance degradation with floor plan geometry.
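The compact, location-aware representation described in the abstract could be sketched as a fixed-size per-agent observation vector that encodes only the agent's own coordinates and nearby task coordinates, omitting other agents entirely. The function name, zero-padding scheme, and task limit below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def compact_observation(agent_xy, task_xys, max_tasks=4):
    """Build a fixed-size observation vector for one agent.

    Layout: [agent_x, agent_y, task1_x, task1_y, ..., taskN_x, taskN_y],
    zero-padded up to `max_tasks` tasks. Other agents' positions are
    deliberately excluded, yielding only partial observability of the
    underlying MDP.
    """
    obs = np.zeros(2 + 2 * max_tasks, dtype=np.float32)
    obs[0:2] = agent_xy
    for i, (tx, ty) in enumerate(task_xys[:max_tasks]):
        obs[2 + 2 * i] = tx
        obs[3 + 2 * i] = ty
    return obs

# Example: one agent at (1, 2) with two visible tasks.
obs = compact_observation((1.0, 2.0), [(3.0, 4.0), (5.0, 6.0)])
```

Because the vector length is independent of the number of agents, the observation space stays constant as the population grows, which is one plausible way to sidestep the exponential blow-up the abstract mentions.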

Related articles:
arXiv:1809.09095 [cs.LG] (Published 2018-09-23)
On Reinforcement Learning for Full-length Game of StarCraft
arXiv:1203.3481 [cs.LG] (Published 2012-03-15)
Real-Time Scheduling via Reinforcement Learning
arXiv:1703.00956 [cs.LG] (Published 2017-03-02)
A Laplacian Framework for Option Discovery in Reinforcement Learning