PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning
Abstract
Multi-agent path finding (MAPF) is an essential component of many large-scale, real-world robot deployments, from aerial swarms to warehouse automation. However, despite the community's continued efforts, most state-of-the-art MAPF planners still rely on centralized planning and scale poorly past a few hundred agents. Such planning approaches are maladapted to real-world deployments, where noise and uncertainty often require paths to be recomputed online, which is impossible when planning times run from seconds to minutes. We present PRIMAL, a novel framework for MAPF that combines reinforcement and imitation learning to teach fully-decentralized policies, where agents reactively plan paths online in a partially-observable world while exhibiting implicit coordination. This framework extends our previous work on distributed learning of collaborative policies by introducing demonstrations of an expert MAPF planner during training, as well as careful reward shaping and environment sampling. Once learned, the resulting policy can be copied onto any number of agents and naturally scales to different team sizes and world dimensions. We present results on randomized worlds with up to 1024 agents and compare success rates against state-of-the-art MAPF planners. Finally, we experimentally validate the learned policies in a hybrid simulation of a factory mockup, involving both real-world and simulated robots.
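To make the decentralized execution idea concrete, below is a minimal, hypothetical Python sketch (not the authors' released code): every agent independently crops a local field of view around itself, computes a direction vector toward its own goal, and queries a single shared policy to choose its next move, with no central solver. The function names, field-of-view size, grid encoding, and the greedy stand-in policy are illustrative assumptions only; in PRIMAL the shared policy is a trained neural network.

# Hypothetical sketch of decentralized execution with one shared policy.
# Assumes a 2D grid where cells equal to 1 are obstacles; names are placeholders.
import numpy as np

ACTIONS = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # stay, right, left, down, up

def local_observation(world, pos, goal, fov=10):
    """Crop a fov x fov window around the agent (padded with obstacles)
    and compute the normalized direction toward the agent's goal."""
    half = fov // 2
    padded = np.pad(world, half, constant_values=1)
    r, c = pos[0] + half, pos[1] + half
    patch = padded[r - half:r + half, c - half:c + half]
    direction = np.array(goal, dtype=float) - np.array(pos, dtype=float)
    norm = np.linalg.norm(direction)
    return patch, direction / norm if norm > 0 else direction

def shared_policy(patch, direction):
    """Stand-in for the trained network: greedily step toward the goal
    while avoiding obstacle cells in the local patch (illustration only)."""
    center = patch.shape[0] // 2
    best, best_score = 0, -np.inf
    for i, (dr, dc) in enumerate(ACTIONS):
        if patch[center + dr, center + dc] == 1:  # blocked cell
            continue
        score = dr * direction[0] + dc * direction[1]
        if score > best_score:
            best, best_score = i, score
    return best

def step_all(world, positions, goals):
    """Each agent plans reactively from its own partial observation."""
    moves = []
    for pos, goal in zip(positions, goals):
        patch, direction = local_observation(world, pos, goal)
        a = shared_policy(patch, direction)
        moves.append((pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1]))
    return moves

Because the same policy is queried by every agent from purely local information, the loop above scales to arbitrary team sizes and world dimensions, which mirrors the scaling property described in the abstract.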
BibTeX
@article{Sartoretti-2019-119943,
  author  = {Guillaume Adrien Sartoretti and Justin Kerr and Yunfei Shi and Glenn Wagner and T. K. Satish Kumar and Sven Koenig and Howie Choset},
  title   = {PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2019},
  month   = {July},
  volume  = {4},
  number  = {3},
  pages   = {2378--2385},
}