Robust Monocular Vision-based Navigation for a Miniature Fixed-Wing Aircraft - Robotics Institute Carnegie Mellon University

PhD Thesis Proposal

Myung Hwangbo, Carnegie Mellon University
Tuesday, September 15
1:00 pm to 12:00 am

Event Location: Newell Simon Hall 1109

Abstract: Recently the operation of unmanned aerial vehicles (UAVs) has expanded from military to civilian applications. Unlike remotely controlled tasks at high altitude, low-altitude flight in an urban environment requires a higher level of autonomy to respond to complex and unpredictable situations. Vision-based methods for autonomous navigation are a promising approach because of the rich, multi-layered information delivered by images, but their robustness across varied situations has been hard to achieve. We propose a series of monocular computer vision algorithms, combined with vehicle dynamics and other navigational sensors, for GPS-denied environments such as an urban canyon. We use a fixed-wing model airplane with a 1 m wingspan as our UAV platform. Because of its small payload and the limited communication bandwidth to off-body processors, particular attention is paid to both real-time performance and robustness at every level of vision processing on low-grade images.


In point-to-point navigation, state estimation is based on structure from motion (SFM) using natural landmarks, under the condition that the captured images have sufficient texture. To cope with the fundamental limits of monocular visual odometry (scale ambiguity and rotation-translation ambiguity), vehicle dynamics and airspeed measurements are incorporated in a Kalman filter framework. More robust estimation is obtained from multiple rails of SFM traced in an interweaving fashion. Reliable input to the SFM is provided by optical-flow computation that is tightly coupled with the IMU: predictive warping parameters and a high-order motion model enhance the accuracy and lifespan of KLT feature tracking. We also employ vision-based horizon detection as an absolute attitude sensor, which is useful for low-level control of a UAV.
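To illustrate how an airspeed measurement can resolve the scale ambiguity of monocular visual odometry, here is a minimal one-dimensional Kalman-filter sketch. It is not the proposal's actual filter: the state (a single scale factor), the noise values, and the measurement model are illustrative assumptions.

```python
def update_scale(scale, var, t_norm, airspeed, dt, meas_var=0.04):
    """One Kalman update of the monocular scale factor (illustrative sketch).

    scale, var : current scale estimate and its variance
    t_norm     : magnitude of the up-to-scale visual-odometry translation
    airspeed   : airspeed measurement [m/s]; dt: frame interval [s]
    """
    d_pred = scale * t_norm              # metric distance implied by current scale
    d_meas = airspeed * dt               # metric distance implied by the airspeed sensor
    S = t_norm * var * t_norm + meas_var # innovation variance
    K = var * t_norm / S                 # Kalman gain
    scale += K * (d_meas - d_pred)       # correct the scale toward the airspeed evidence
    var *= (1.0 - K * t_norm)            # shrink the uncertainty
    return scale, var

# Hypothetical numbers: unit-norm visual translation per frame,
# 20 m/s airspeed, 30 Hz frame rate.
s, v = 1.0, 1.0
for _ in range(50):
    s, v = update_scale(s, v, t_norm=1.0, airspeed=20.0, dt=1.0 / 30.0)
# s converges toward 20/30, the metric distance travelled per frame.
```

With noise-free, consistent measurements the estimate settles quickly; in practice the scale would be one state among many in the full vehicle-dynamics filter.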


The performance of the proposed method is evaluated in what we call an air-slalom task, in which the UAV must pass through multiple gates in the air in a row. The task demonstrates how a fixed-wing UAV copes with its limited agility, which is inferior to that of hover-capable platforms in typical urban operations. To efficiently find a feasible obstacle-free path to a goal, we propose a 3D Dubins heuristic for the optimal cost-to-goal and use a set of lateral and longitudinal motion primitives, interconnected at trim states, to reduce the dimension of the configuration space. We first demonstrate our visual navigation in our UAV simulator, which can be switched between live and synthetic modes, each including wireless data transmission to a ground station.
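To give a flavor of the kinematic constraints behind such a heuristic, the sketch below computes a much simpler admissible lower bound on 3D path length for a fixed-wing aircraft with a bounded climb angle. It is not the proposal's 3D Dubins heuristic (which also accounts for heading and turn radius); the function name and the climb-angle value are illustrative assumptions.

```python
import math

def path_length_lower_bound(start, goal, gamma_max=math.radians(15)):
    """Admissible lower bound on 3D flight-path length between two positions.

    start, goal : (x, y, z) positions in metres
    gamma_max   : assumed maximum climb/descent angle of the fixed-wing UAV

    Any feasible path must cover the horizontal distance, and because the
    aircraft cannot climb steeper than gamma_max, it must also be at least
    |dz| / tan(gamma_max) long. The larger of the two bounds both.
    """
    dx, dy, dz = (g - s for s, g in zip(start, goal))
    d_xy = math.hypot(dx, dy)
    return max(d_xy, abs(dz) / math.tan(gamma_max))
```

A full Dubins-style heuristic would additionally lengthen the bound when the goal heading forces extra turning, which matters precisely in the tight air-slalom gates described above.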

Committee: Takeo Kanade, Co-chair

James Kuffner, Co-chair

Sanjiv Singh

Omead Amidi

Randy Beard, Brigham Young University