An MDP-based approximation method for goal constrained multi-MAV planning under action uncertainty
Abstract
This paper presents a fast approximate multi-agent decision theoretic planning method extended from the well-known Markov Decision Process (MDP). Our objective is to plan motions for a team of homogeneous micro air vehicles (MAVs) toward a set of goals, such that each MAV, at any state and at any moment, follows an action policy toward a unique goal while accounting for action uncertainty. We pursue an efficient formulation by first considering a deterministic abstraction of the stochastic system based on approximate initial paths. These deterministic and decoupled sub-problems are then converted back to the stochastic domain and improved by individual agents or subsets of agents. The resulting decoupled formulation requires processing only a partial state space and enables online operation in applications with emerging tasks.
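To make the underlying machinery concrete, the sketch below runs value iteration on a toy single-agent grid MDP with action uncertainty (a "slip" probability) and extracts a greedy action policy toward a goal cell. This is only an illustration of the MDP policy computation the paper builds on, not the authors' decoupled multi-MAV method; the grid size, slip probability, step cost, and goal location are all assumptions made for the example.

```python
# Illustrative sketch only: value iteration on a toy grid MDP with
# action uncertainty. This is NOT the paper's decoupled multi-MAV
# algorithm; it shows the basic MDP policy computation it extends.
# Grid size, slip probability, step cost, and goal are assumptions.

GAMMA = 0.95      # discount factor (assumed)
SLIP = 0.1        # probability the action "slips" and the MAV stays put
SIZE = 4          # 4x4 grid, states indexed (row, col)
GOAL = (3, 3)     # single goal cell (assumed)

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, move):
    """Intended next state, clipped to the grid boundary."""
    r = min(max(state[0] + move[0], 0), SIZE - 1)
    c = min(max(state[1] + move[1], 0), SIZE - 1)
    return (r, c)

def value_iteration(tol=1e-6):
    """Compute state values under a uniform -1 step cost; goal is absorbing."""
    states = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == GOAL:
                continue  # absorbing goal keeps value 0
            best = max(
                (1 - SLIP) * (-1 + GAMMA * V[step(s, m)])
                + SLIP * (-1 + GAMMA * V[s])
                for m in ACTIONS.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V):
    """Action policy: each state picks the move maximizing expected value."""
    policy = {}
    for s in V:
        if s == GOAL:
            continue
        policy[s] = max(
            ACTIONS,
            key=lambda a: (1 - SLIP) * V[step(s, ACTIONS[a])] + SLIP * V[s],
        )
    return policy

V = value_iteration()
pi = greedy_policy(V)
```

In the paper's setting this computation would be restricted to a partial state space seeded from deterministic initial paths, and run per agent (or per subset of agents) with each agent assigned a unique goal.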
BibTeX
@conference{Liu-2016-120131,
  author    = {L. Liu and N. Michael},
  title     = {An {MDP}-based approximation method for goal constrained multi-{MAV} planning under action uncertainty},
  booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
  year      = {2016},
  month     = {May},
  pages     = {56--62},
}