Attack-Resilient Submodular Maximization for Multi-Robot Planning

Lifeng Zhou, Virginia Tech

Abstract: A major challenge in the practical deployment of multi-robot teams is making the robots resilient to failures. Robots may be attacked in adversarial scenarios, or they may fail for a variety of other reasons. In this talk, I will present planning algorithms for multi-robot, multi-target tracking that are resilient to such failures.
I will present both centralized and distributed resilient algorithms to counter such failures and attacks. These algorithms degrade gracefully, in theory and in practice, as the number of attacked robots increases. I will quantify their approximation performance using a novel notion of curvature for monotone submodular set functions subject to matroid constraints. Finally, I will discuss some recent results on risk-aware joint planning and perception using deep learning.
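
For context, one common way this line of work formalizes attack-resilient planning is as a max-min submodular maximization problem; the sketch below is a standard formulation under assumed notation (the objective f, the matroid I, and the attack budget alpha are illustrative names, and the exact problem statement in the talk may differ):

\[
  \max_{\mathcal{A} \in \mathcal{I}} \;\;
  \min_{\substack{\mathcal{B} \subseteq \mathcal{A} \\ |\mathcal{B}| \le \alpha}} \;
  f(\mathcal{A} \setminus \mathcal{B}),
\]

where \(f\) is a monotone submodular objective (e.g., the number of tracked targets), \(\mathcal{I}\) is a matroid encoding the robots' feasible action assignments, and \(\alpha\) bounds how many robots' contributions an attacker can remove. Intuitively, the planner picks actions \(\mathcal{A}\) anticipating a worst-case removal \(\mathcal{B}\) of up to \(\alpha\) of them.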