Synthesizing Multi-Robot Policies: From Cooperative Perception to Human-Led Fleet Control
How do we orchestrate large teams of robots? How do we distill global goals into local robot policies? And how do we seamlessly integrate human-led fleet control? Machine learning has revolutionized the way we address these questions by enabling us to synthesize agent policies automatically from high-level objectives. In this presentation, I will first describe how we leverage data-driven approaches to learn interaction strategies that lead to coordinated, cooperative robot behaviors. I will introduce our work on Graph Neural Networks and show how we use such architectures to learn multi-agent policies through differentiable communication channels. I will present experimental results with mobile robots engaged in cooperative perception, formation control, and human-led path-finding; I will also show how the methods scale to very large systems, and how they model the complex physical interactions that arise in close-proximity flight with multiple quadrotors.