Resource-Constrained State Estimation with Multi-Modal Sensing - Robotics Institute, Carnegie Mellon University

PhD Thesis Proposal

John Yao, Robotics Institute, Carnegie Mellon University
Friday, September 20
10:00 am to 11:00 am
GHC 4405
Resource-Constrained State Estimation with Multi-Modal Sensing

Abstract:
Accurate and reliable state estimation is essential for safe mobile robot operation in real-world environments because ego-motion estimates are required by many critical autonomy functions such as control, planning, and mapping. Computing accurate state estimates depends on the physical characteristics of the environment, the selection of suitable sensors to capture that information, and the availability of compute to perform inference on sensor observations. Because environment characteristics cannot be controlled in typical field operation scenarios, careful sensor selection and efficient fusion algorithms are required to achieve accurate, high-rate state estimation on mobile robots.

Visual-inertial odometry is well-suited to onboard state estimation in environments with rich visual texture. Like other vision-based methods, however, it performs poorly on images with non-ideal characteristics such as low visual texture, dynamic objects, or distant scenes. Common mitigations include introducing visual observations from different fields of view and incorporating depth observations. However, processing additional sources and types of observations requires more computation, which is extremely limited on size-, weight-, and power-constrained robots such as micro aerial vehicles. There is therefore a need to reduce the computational burden of using more observations by efficiently selecting the most useful observations from multiple heterogeneous sensors.

In this thesis, we propose an optimization-based state estimator that fuses observations from multiple heterogeneous sensors to achieve reliable performance in scenarios that are challenging for typical minimal sensing configurations. The work completed thus far demonstrates a baseline monocular sparse visual-inertial odometry algorithm based on fixed-lag smoothing, augmented with improved covariance estimation and initialization from rest. This baseline state estimator has been fully implemented onboard an aerial robot and used for closed-loop control in a variety of indoor and outdoor environments. The proposed work aims to improve state estimation performance in non-ideal sensing conditions by (1) extending the visual-inertial odometry framework to support multiple asynchronous cameras with disjoint fields of view, (2) leveraging depth observations to boost performance in visually degraded environments, and (3) developing methods to allocate limited computational resources to the exteroceptive observations that are most informative for estimation. The proposed methods will be evaluated onboard an aerial robot in visually challenging environments.
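The baseline estimator described above is built on fixed-lag smoothing, which repeatedly re-optimizes only the most recent states in a sliding window while older states are frozen. The sketch below illustrates that idea on a toy 1-D linear problem; the function name, the scalar state, and the prior-as-marginalization shortcut are all illustrative assumptions, not the thesis implementation (which fuses sparse visual and inertial factors over robot poses):

```python
import numpy as np

def fixed_lag_smoother(rel_meas, abs_meas, lag=3):
    """Toy 1-D fixed-lag smoother (illustrative only).

    rel_meas[k] constrains x[k+1] - x[k] (odometry-like factor);
    abs_meas[k] constrains x[k] directly (absolute observation).
    Only the most recent `lag` states are re-optimized each step; the
    oldest state in the window is anchored by a prior on its previous
    estimate, a crude stand-in for true marginalization.
    """
    n = len(abs_meas)
    est = np.zeros(n)
    for k in range(1, n):
        # propagate the newest state with the latest relative measurement
        est[k] = est[k - 1] + rel_meas[k - 1]
        lo = max(0, k - lag + 1)   # index of oldest state in the window
        m = k - lo + 1             # number of states in the window
        rows, rhs = [], []
        # prior factor anchoring the oldest windowed state
        r = np.zeros(m); r[0] = 1.0
        rows.append(r); rhs.append(est[lo])
        # relative (odometry) factors inside the window
        for j in range(m - 1):
            r = np.zeros(m); r[j] = -1.0; r[j + 1] = 1.0
            rows.append(r); rhs.append(rel_meas[lo + j])
        # absolute observation factors inside the window
        for j in range(m):
            r = np.zeros(m); r[j] = 1.0
            rows.append(r); rhs.append(abs_meas[lo + j])
        # linear least-squares solve over the window only
        sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs),
                                  rcond=None)
        est[lo:k + 1] = sol
    return est
```

Because the window size is fixed, each solve costs a constant amount regardless of trajectory length, which is the property that makes fixed-lag smoothing attractive under tight onboard compute budgets. All factors are equally weighted here; a real estimator would weight each residual by its measurement covariance.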


Thesis Committee Members:
Nathan Michael, Chair
Red Whittaker
Michael Kaess
Hatem Alismail, Uber ATG