Robotics Institute, Carnegie Mellon University

Vision and Learning for Deliberative Monocular Cluttered Flight

Debadeepta Dey, Kumar Shaurya Shankar, Sam Zeng, Rupesh Mehta, M. Talha Agcayazi, Christopher Eriksen, Shreyansh Daftry, Martial Hebert, and J. Andrew (Drew) Bagnell
Conference Paper, Proceedings of 10th International Conference on Field and Service Robotics (FSR '15), pp. 391 - 409, June, 2015

Abstract

Cameras provide a rich source of information while being passive, cheap, and lightweight for small Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. Two key contributions make this possible: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, and leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline in real-world experiments totaling more than 2 km of flight through dense trees with an off-the-shelf quadrotor. Moreover, our pipeline is designed to combine information from other modalities such as stereo and lidar.
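The receding-horizon idea the abstract refers to can be sketched as follows. This is a minimal 2-D illustration, not the paper's actual planner: the trajectory library, scoring weights, and safety radius below are illustrative assumptions, and obstacles are taken as points already recovered from a predicted depth map.

```python
import numpy as np

def score_trajectories(trajs, obstacles, goal, safety_radius=0.5, w_clear=0.2):
    """Score each candidate trajectory from a precomputed library.

    trajs:     list of (N, 2) waypoint arrays in the robot frame
    obstacles: (M, 2) obstacle points (e.g. back-projected from predicted depth)
    goal:      (2,) goal position in the robot frame
    """
    scores = []
    for traj in trajs:
        # minimum clearance over all (waypoint, obstacle) pairs
        d = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=-1)
        clearance = d.min()
        if clearance < safety_radius:
            scores.append(-np.inf)  # trajectory in collision: reject outright
            continue
        # reward endpoints that make progress toward the goal,
        # plus a small bonus for staying far from obstacles
        progress = -np.linalg.norm(traj[-1] - goal)
        scores.append(progress + w_clear * clearance)
    return np.array(scores)

def receding_horizon_step(trajs, obstacles, goal):
    """One replanning cycle: pick the best trajectory, command its first waypoint."""
    scores = score_trajectories(trajs, obstacles, goal)
    best = int(np.argmax(scores))
    return best, trajs[best][0]

# Toy library: straight ahead plus two 30-degree arcs, sampled out to 5 m.
headings = np.deg2rad([-30.0, 0.0, 30.0])
trajs = [np.outer(np.linspace(1, 5, 5), [np.cos(h), np.sin(h)]) for h in headings]
obstacles = np.array([[3.0, 0.0]])   # a tree directly ahead at 3 m
goal = np.array([10.0, 0.0])

best, cmd = receding_horizon_step(trajs, obstacles, goal)
# The straight trajectory passes through the obstacle and is rejected,
# so the planner swerves; after executing a short segment of the chosen
# trajectory, the loop would re-run on the next depth prediction.
```

In a full receding-horizon loop this selection runs at every control cycle against fresh depth predictions, executing only the initial segment of each chosen trajectory before replanning.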

BibTeX

@conference{Dey-2015-5970,
author = {Debadeepta Dey and Kumar Shaurya Shankar and Sam Zeng and Rupesh Mehta and M. Talha Agcayazi and Christopher Eriksen and Shreyansh Daftry and Martial Hebert and J. Andrew (Drew) Bagnell},
title = {Vision and Learning for Deliberative Monocular Cluttered Flight},
booktitle = {Proceedings of 10th International Conference on Field and Service Robotics (FSR '15)},
year = {2015},
month = {June},
pages = {391--409},
}