Publication

G. Sartoretti, J. Kerr, Y. Shi, G. Wagner, S. Kumar, S. Koenig and H. Choset. PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning. IEEE Robotics and Automation Letters, (in press), 2019.


Abstract: Multi-agent path finding (MAPF) is an essential component of many large-scale, real-world robot deployments, from aerial swarms to warehouse automation. However, despite the community's continued efforts, most state-of-the-art MAPF planners still rely on centralized planning and scale poorly past a few hundred agents. Such planning approaches are maladapted to real-world deployments, where noise and uncertainty often require paths to be recomputed online, which is impossible when planning times range from seconds to minutes. We present PRIMAL, a novel framework for MAPF that combines reinforcement and imitation learning to teach fully-decentralized policies, where agents reactively plan paths online in a partially-observable world while exhibiting implicit coordination. This framework extends our previous work on distributed learning of collaborative policies by introducing demonstrations of an expert MAPF planner during training, as well as careful reward shaping and environment sampling. Once learned, the resulting policy can be copied onto any number of agents and naturally scales to different team sizes and world dimensions. We present results on randomized worlds with up to 1024 agents and compare success rates against state-of-the-art MAPF planners. Finally, we experimentally validate the learned policies in a hybrid simulation of a factory mockup, involving both real-world and simulated robots.
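The pattern the abstract describes can be sketched in a few lines: a single policy is trained with a blend of reinforcement learning (a policy-gradient signal on a shaped reward) and imitation of an expert MAPF planner, then the same weights are copied onto every agent, each of which acts reactively on its own partial observation. The sketch below is illustrative only, not the authors' implementation; the linear-softmax policy, the observation shape, and all names (act, blended_grad, il_weight) are assumptions made for the example.

# Illustrative sketch of the training/execution pattern in the abstract --
# NOT the authors' code. One shared policy, trained with RL plus imitation
# of an expert planner, is copied to every agent for decentralized execution.
# The linear-softmax policy and observation shape below are assumptions.

import numpy as np

N_ACTIONS = 5          # e.g., stay, up, down, left, right (assumed)
OBS_DIM = 11 * 11 * 4  # e.g., a 4-channel 11x11 local field of view (assumed)

rng = np.random.default_rng(0)
weights = 0.01 * rng.standard_normal((OBS_DIM, N_ACTIONS))  # shared policy

def act(obs, w):
    """Sample an action from the softmax policy on one agent's local observation."""
    logits = obs @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p), p

def blended_grad(obs, action, advantage, probs, expert_action=None, il_weight=0.5):
    """Gradient of an RL policy-gradient loss, plus a behavior-cloning
    (cross-entropy) term toward the expert's action on demonstration steps."""
    one_hot = np.eye(N_ACTIONS)
    g = -advantage * np.outer(obs, one_hot[action] - probs)       # RL term
    if expert_action is not None:                                 # IL term
        g += il_weight * np.outer(obs, probs - one_hot[expert_action])
    return g

# Decentralized execution: the SAME weights are copied onto any number of
# agents; each one plans reactively from its own partial observation.
for agent_id in range(8):
    obs = rng.random(OBS_DIM)        # stand-in for the agent's local view
    a, _ = act(obs, weights)
    print(f"agent {agent_id} takes action {a}")

A real training loop would accumulate many such gradients and apply them with a gradient-descent optimizer; in the paper's framework the policy network, reward shaping, and environment sampling are of course far richer than this linear sketch.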

Additional videos can be found on the paper's YouTube playlist.


Download the paper in PDF.
