arXiv Analytics

arXiv:2011.10034 [cs.RO]

Decentralized Task and Path Planning for Multi-Robot Systems

Yuxiao Chen, Ugo Rosolia, Aaron D. Ames

Published 2020-11-19Version 1

We consider a multi-robot system with a team of collaborative robots and multiple tasks that emerge over time. We propose a fully decentralized task and path planning (DTPP) framework consisting of a task allocation module and a localized path planning module. Each task is modeled as a Markov Decision Process (MDP) or a Mixed Observability Markov Decision Process (MOMDP), depending on whether the state is fully or only partially observable. The task allocation module aims at maximizing the expected net reward (reward minus cost) of the robotic team. We fuse the Markov models into a factor graph formulation so that the task allocation can be solved in a decentralized manner using the max-sum algorithm. Each robot agent follows the optimal policy synthesized for its Markov model, and we propose a localized forward dynamic programming scheme that resolves conflicts between agents and avoids collisions. The proposed framework is demonstrated with high-fidelity ROS simulations and experiments with multiple ground robots.
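The max-sum step in the abstract can be illustrated with a toy sketch. The code below is not the paper's implementation: it is a generic max-sum message-passing routine on a small hand-built factor graph, with hypothetical robots `r1`/`r2`, unary reward tables, and a pairwise conflict factor chosen purely for illustration. Variable nodes are robots, their values are task choices, and factors encode per-robot task rewards plus a penalty when two robots claim the same task.

```python
import itertools

def max_sum(variables, factors, n_iters=10):
    """Max-sum message passing on a factor graph.

    variables: dict var_name -> domain size
    factors:   list of (scope_tuple, table) where table maps assignment
               tuples over the scope to real-valued utilities.
    Returns a decoded assignment dict var_name -> value.
    """
    # messages indexed by (variable, factor_index) and (factor_index, variable)
    msg_vf = {}
    msg_fv = {}
    for i, (scope, _) in enumerate(factors):
        for v in scope:
            msg_vf[(v, i)] = [0.0] * variables[v]
            msg_fv[(i, v)] = [0.0] * variables[v]

    for _ in range(n_iters):
        # variable -> factor: sum of messages from all *other* factors
        for (v, i) in msg_vf:
            msg_vf[(v, i)] = [
                sum(msg_fv[(j, v)][x]
                    for j, (scope, _) in enumerate(factors)
                    if v in scope and j != i)
                for x in range(variables[v])]
        # factor -> variable: max over the other variables of
        # factor utility + their incoming messages
        for i, (scope, table) in enumerate(factors):
            for v in scope:
                new = [float("-inf")] * variables[v]
                domains = [range(variables[u]) for u in scope]
                for assign in itertools.product(*domains):
                    val = table[assign] + sum(
                        msg_vf[(u, i)][assign[k]]
                        for k, u in enumerate(scope) if u != v)
                    xv = assign[scope.index(v)]
                    new[xv] = max(new[xv], val)
                msg_fv[(i, v)] = new

    # decode: each variable locally picks the argmax of its summed messages
    decoded = {}
    for v in variables:
        belief = [sum(msg_fv[(i, v)][x]
                      for i, (scope, _) in enumerate(factors) if v in scope)
                  for x in range(variables[v])]
        decoded[v] = max(range(variables[v]), key=belief.__getitem__)
    return decoded

# Toy allocation: robots r1, r2 each choose task 0 or 1.
U1 = {(0,): 3.0, (1,): 1.0}   # r1's expected net reward per task
U2 = {(0,): 2.5, (1,): 2.0}   # r2's expected net reward per task
C = {(a, b): -5.0 if a == b else 0.0
     for a in (0, 1) for b in (0, 1)}  # penalty for claiming the same task
alloc = max_sum({"r1": 2, "r2": 2},
                [(("r1",), U1), (("r2",), U2), (("r1", "r2"), C)])
print(alloc)  # -> {'r1': 0, 'r2': 1}
```

Each update uses only messages exchanged between a robot and the factors it participates in, which is what makes the allocation decentralizable; on this tree-shaped toy graph max-sum is exact and recovers the brute-force optimum (r1 on task 0, r2 on task 1, total utility 5.0).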

Related articles: Most relevant | Search more
arXiv:2011.10488 [cs.RO] (Published 2020-11-20)
Utilizing ROS 1 and the Turtlebot3 in a Multi-Robot System
arXiv:1903.00948 [cs.RO] (Published 2019-03-03)
State-Continuity Approximation of Markov Decision Processes via Finite Element Analysis for Autonomous System Planning
arXiv:1806.06134 [cs.RO] (Published 2018-06-15)
Learning 6-DoF Grasping and Pick-Place Using Attention Focus