arXiv:2011.10034 [cs.RO]

Decentralized Task and Path Planning for Multi-Robot Systems

Yuxiao Chen, Ugo Rosolia, Aaron D. Ames

Published 2020-11-19 (Version 1)

We consider a multi-robot system with a team of collaborative robots and multiple tasks that emerge over time. We propose a fully decentralized task and path planning (DTPP) framework consisting of a task allocation module and a localized path planning module. Each task is modeled as a Markov Decision Process (MDP) or a Mixed Observable Markov Decision Process (MOMDP), depending on whether full or only partial states are observable. The task allocation module aims at maximizing the expected net reward (reward minus cost) of the robotic team. We fuse the Markov models into a factor graph formulation so that the task allocation can be solved in a decentralized manner using the max-sum algorithm. Each robot agent follows the optimal policy synthesized for the Markov model, and we propose a localized forward dynamic programming scheme that resolves conflicts between agents and avoids collisions. The proposed framework is demonstrated with high-fidelity ROS simulations and experiments with multiple ground robots.
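
To illustrate the task allocation step described above (casting the assignment problem as a factor graph and solving it with max-sum message passing), here is a minimal Python sketch. It is not the authors' implementation: the robot and task names, rewards, and costs are hypothetical, and the per-task utility model (reward earned once if any robot commits to a task, minus the costs of the robots that chose it) is an assumption made only for this example; in the paper's framework the expected rewards would come from the tasks' MDP/MOMDP solutions.

import itertools

# Hypothetical robots, tasks, expected task rewards (e.g. from the tasks'
# MDP/MOMDP value functions) and robot-to-task service costs.
robots = ["r1", "r2", "r3"]
tasks = ["t1", "t2"]
reward = {"t1": 10.0, "t2": 6.0}
cost = {("r1", "t1"): 2.0, ("r1", "t2"): 5.0,
        ("r2", "t1"): 4.0, ("r2", "t2"): 1.0,
        ("r3", "t1"): 3.0, ("r3", "t2"): 4.0}

def task_utility(task, assignment):
    # Net utility of one task factor: the reward is earned once if any robot
    # commits to the task, minus the costs of the robots that chose it.
    assigned = [r for r in robots if assignment[r] == task]
    if not assigned:
        return 0.0
    return reward[task] - sum(cost[(r, task)] for r in assigned)

# Max-sum messages on the factor graph: one variable node per robot (its
# chosen task) and one factor node per task (its net utility).
q = {r: {t: {c: 0.0 for c in tasks} for t in tasks} for r in robots}    # robot -> task factor
msg = {t: {r: {c: 0.0 for c in tasks} for r in robots} for t in tasks}  # task factor -> robot

for _ in range(10):  # a few synchronous max-sum iterations
    # Variable-to-factor: sum the messages coming from the other factors.
    for rob in robots:
        for t in tasks:
            for c in tasks:
                q[rob][t][c] = sum(msg[f][rob][c] for f in tasks if f != t)
            mean = sum(q[rob][t].values()) / len(tasks)
            for c in tasks:
                q[rob][t][c] -= mean  # normalize to keep message values bounded
    # Factor-to-variable: maximize the factor utility plus the other robots' messages.
    for t in tasks:
        for rob in robots:
            others = [r for r in robots if r != rob]
            for c in tasks:
                best = float("-inf")
                for combo in itertools.product(tasks, repeat=len(others)):
                    assignment = dict(zip(others, combo))
                    assignment[rob] = c
                    val = task_utility(t, assignment) + sum(
                        q[o][t][assignment[o]] for o in others)
                    best = max(best, val)
                msg[t][rob][c] = best

# Each robot decides locally from the sum of its incoming messages.
for rob in robots:
    belief = {c: sum(msg[t][rob][c] for t in tasks) for c in tasks}
    print(rob, "->", max(belief, key=belief.get))

Because each robot only exchanges messages with the task factors it can serve and then decides from its own incoming messages, the computation distributes naturally across agents, which is the property the abstract's decentralized task allocation relies on.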

Related articles:
arXiv:1903.00948 [cs.RO] (Published 2019-03-03)
State-Continuity Approximation of Markov Decision Processes via Finite Element Analysis for Autonomous System Planning
arXiv:2210.11779 [cs.RO] (Published 2022-10-21)
Reaching Through Latent Space: From Joint Statistics to Path Planning in Manipulation
arXiv:2205.14251 [cs.RO] (Published 2022-05-27)
Is it Worth to Reason about Uncertainty in Occupancy Grid Maps during Path Planning?