MSR Speaking Qualifier

Sourish Ghosh
Robotics Institute, Carnegie Mellon University
Thursday, May 5
10:00 am to 11:00 am
NSH 4305
Vision-based Aircraft Detection and Tracking for Detect-and-Avoid

Abstract:
Detect-and-Avoid (DAA) capabilities are critical for autonomous operations of small unmanned aircraft systems (sUAS). Traditionally, DAA systems for large aircraft have been ground- and radar-based. Due to the size, weight, and power (SWaP) constraints of sUAS, current DAA systems rely mainly on vision-based sensors and ADS-B (Automatic Dependent Surveillance-Broadcast) transponders. However, not all flying objects carry transponders, so a vision-based DAA capability is needed for safe low-altitude autonomous flight.

In this work, we present the vision-based DAA problem, particularly the problem of aircraft detection and tracking. At long distances, planes and helicopters appear extremely small relative to the whole image, so the task becomes detecting tiny objects in high-resolution videos taken from a moving camera. Historically, this problem has been tackled with a multi-stage image-processing pipeline involving: (1) ego-motion estimation, (2) background/foreground detection using morphological operations, and (3) tracking using temporal filtering. With the advent of deep learning, modern approaches rely on learning-based object detection methods. Standard object detection architectures, however, typically do not scale well to this task under real-time performance constraints.
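The classical three-stage pipeline described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the speaker's system: the `warp` callable, the threshold value, and the 3x3 structuring element are all assumptions made for the example, and ego-motion compensation is reduced to a user-supplied warp function.

```python
import numpy as np

def morphological_open(mask, k=3):
    """Erosion followed by dilation with a k x k square structuring
    element (pure-NumPy stand-in for the morphological step)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    eroded = windows.all(axis=(-1, -2))          # erosion: all pixels under SE set
    padded = np.pad(eroded, pad, mode="constant")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.any(axis=(-1, -2))            # dilation: any pixel under SE set

def detect_moving_objects(prev_frame, frame, warp, thresh=0.1):
    """Toy classical pipeline: (1) compensate camera ego-motion by warping
    the previous frame, (2) threshold the residual and clean it with a
    morphological opening, (3) return candidate pixels for a downstream
    temporal filter to track."""
    stabilized = warp(prev_frame)           # ego-motion compensation (assumed given)
    residual = np.abs(frame - stabilized)   # background/foreground residual
    mask = morphological_open(residual > thresh)
    return np.argwhere(mask)                # candidate detections as (row, col)
```

The opening step is what suppresses single-pixel sensor noise while keeping small but contiguous blobs, which is exactly the trade-off that makes this pipeline attractive for tiny-object detection.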

Our approach to solving this problem follows a two-stage pipeline: (1) ego-motion estimation, and (2) detection and tracking. Both stages are fully convolutional neural networks that can scale to high-resolution inputs. They are trained on a labeled dataset released by Amazon Prime Air containing 3.3M images of airplanes, helicopters, drones, and other flying objects. We also developed our own aircraft data collection systems and designed a custom vision-based DAA payload for in-flight encounters. Through empirical evaluation on real-world data, our approach is compared with two baseline detection and tracking architectures and is shown to be superior. Analyzing our quantitative results in the context of a well-known DAA industry standard (ASTM F3442/F3442M-20), we also show that the proposed method can satisfy the visual DAA requirements for certain classes of unmanned aircraft with a cruise speed of 60-90 kts, a minimum turn rate of 20 deg/s, and a minimum climb rate of 250 ft/min.
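The key property that lets both stages scale to high-resolution inputs is that a fully convolutional network has no fixed-size dense layer: the same learned weights slide over any input size and emit a proportionally sized detection heatmap. A minimal NumPy sketch of that property (the kernel here is a hypothetical stand-in for learned weights, not the talk's architecture):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single 'valid' 2-D correlation -- the basic building block of a
    fully convolutional detector head (pure-NumPy sketch)."""
    kh, kw = kernel.shape
    windows = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
    return np.einsum("ijkl,kl->ij", windows, kernel)

# No dense layer means no fixed input size: the same weights yield a
# heatmap whose resolution tracks the input resolution.
kernel = np.ones((5, 5)) / 25.0                        # stand-in for learned weights
small = conv2d_valid(np.random.rand(64, 64), kernel)   # (60, 60) heatmap
large = conv2d_valid(np.random.rand(180, 320), kernel) # (176, 316) heatmap
```

This is why fully convolutional designs suit tiny-object detection in high-resolution video: resolution can be raised at inference time without retraining or re-architecting the network.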

Committee:
Prof. Sebastian Scherer (chair)
Prof. Kris Kitani
Cherie Ho