Towards Scalable Visual Navigation of Micro Aerial Vehicles
Abstract
Micro Aerial Vehicles (MAVs) have proven useful in a number of important applications, from disaster scene surveillance and package delivery to aerial imaging, architecture and construction. The most important benefit of such lightweight MAVs is their ability to fly at high speeds in space-constrained environments. While autonomous operation with structured motion-capture systems has been well studied, enabling resource-efficient, persistent navigation for long-term autonomy in unstructured and dynamic environments remains an open problem in robotics research. In this thesis, we take a small step in this direction and present a scalable framework for robust visual navigation of MAVs in the wild. Our first contribution is a toolbox of approaches for perception, planning and control of agile, low-cost MAVs in cluttered urban and natural outdoor environments, based on a monocular camera as the main exteroceptive sensor. In particular, we present novel geometry- and data-driven depth estimation methods for nontranslational camera motion and dynamic scenes, where traditional structure from motion (SfM) and Simultaneous Localization and Mapping (SLAM) techniques do not apply, with accuracy comparable to that of a stereo setup. Second, we propose a generic framework for introspection in autonomous robots. As robots aspire to complete autonomy in human-centric environments, accurate situational awareness becomes a critical requirement for verifiable safety standards. We call this self-evaluating capability, a robot's assessment of how qualified it is at a given moment to make a decision, introspection. Inspired by this, we advocate the need to build systems able to take mission-critical decisions in ambiguous situations, and present a failure prediction and recovery framework for perception systems.
Finally, our third contribution towards developing scalable autonomous behavior is a framework for knowledge transfer across robots and environments. We argue that, for many learning-based robot tasks, it is not always possible to obtain sufficient training data. The ability to transfer knowledge gained in previous tasks to new contexts is one of the most important mechanisms of human learning. In this work, we develop a similar technique for learning transferable motion policies, i.e., solving a learning problem in a target domain by utilizing training data from a different but related source domain. The proposed algorithms are quantitatively and qualitatively evaluated on a variety of datasets and validated through real-world field experiments. Our experiments demonstrate that our contributions help to build a scalable, accurate, and computationally feasible visual navigation system for micro aerial vehicles in the wild.
BibTeX
@mastersthesis{Daftry-2016-5497,
author = {Shreyansh Daftry},
title = {Towards Scalable Visual Navigation of Micro Aerial Vehicles},
year = {2016},
month = {April},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-16-07},
}