Resource-Constrained State Estimation with Multi-Modal Sensing

PhD Thesis Defense

John Yao
Robotics Institute, Carnegie Mellon University
Friday, April 10
10:00 am to 11:00 am

Accurate and reliable state estimation is essential for safe mobile robot operation in real-world environments because ego-motion estimates are required by many critical autonomy functions such as control, planning, and mapping. The accuracy of state estimates depends on the physical characteristics of the environment, the selection of sensors suited to capturing that information, and the availability of compute to perform inference on the sensor observations. Because environment characteristics cannot be controlled in typical field operation scenarios, careful sensor selection and efficient fusion algorithms are required to achieve accurate, high-rate state estimation on mobile robots.

Visual-inertial odometry is well suited to onboard state estimation in environments with rich visual texture. Like other vision-based methods, however, it performs poorly on images with non-ideal characteristics such as low visual texture, dynamic objects, or distant scenes. Common mitigations include introducing visual observations from additional fields of view and using depth observations. However, processing more sources and types of observations requires more computation, which is extremely limited on size-, weight-, and power-constrained robots such as micro aerial vehicles. There is therefore a need to reduce the computational burden of exploiting additional observations by efficiently selecting the most useful observations from multiple heterogeneous sensors.
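
The abstract motivates selecting informative observations under a compute budget but does not state a selection criterion here. A common formulation in the estimation literature is greedy maximization of the log-determinant of the state's Fisher information; the numpy sketch below illustrates that idea only. The function name greedy_select, the unit-covariance measurement model, and the toy dimensions are assumptions for exposition, not the method defended in the thesis.

```python
import numpy as np

def greedy_select(jacobians, prior_info, budget):
    """Greedily pick the observations whose Jacobians add the most
    information (log-det gain of the Fisher information matrix) to the
    state estimate, stopping when the compute budget is exhausted."""
    info = prior_info.copy()
    selected = []
    remaining = list(range(len(jacobians)))
    for _ in range(budget):
        _, base_logdet = np.linalg.slogdet(info)
        best_idx, best_gain = None, -np.inf
        for idx in remaining:
            H = jacobians[idx]
            # Unit-covariance measurement assumed: information gain is H^T H.
            _, logdet = np.linalg.slogdet(info + H.T @ H)
            gain = logdet - base_logdet
            if gain > best_gain:
                best_idx, best_gain = idx, gain
        selected.append(best_idx)
        remaining.remove(best_idx)
        info = info + jacobians[best_idx].T @ jacobians[best_idx]
    return selected

# Toy usage: a 6-D state, 20 candidate feature tracks, budget for 5.
rng = np.random.default_rng(0)
candidates = [rng.normal(size=(2, 6)) for _ in range(20)]
print(greedy_select(candidates, np.eye(6), budget=5))
```

Greedy selection is attractive in this setting because log-det information gain is submodular, so the greedy choice carries a constant-factor approximation guarantee while costing far less than exhaustive search.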

In this thesis, we propose an optimization-based state estimator that fuses observations from multiple heterogeneous sensors to achieve reliable performance in scenarios that are challenging for typical minimal sensing configurations. The foundation is a baseline monocular sparse visual-inertial odometry algorithm using fixed-lag smoothing, augmented with improvements in covariance estimation and initialization from rest. We extend this formulation to improve state estimation performance in non-ideal sensing conditions by (1) modifying the visual-inertial odometry framework to support multiple asynchronous cameras with disjoint fields of view, (2) leveraging depth observations to boost performance in visually degraded environments, and (3) developing methods to allocate limited computational resources to the exteroceptive observations that are most informative for estimation. The proposed methods are evaluated both in real time onboard an aerial robot flying in a variety of challenging environments and in postprocessing on datasets collected with that robot.
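
Fixed-lag smoothing keeps computation bounded by optimizing only a sliding window of recent states and marginalizing older states into a prior. The sketch below illustrates that marginalization step in information form on a toy linear-Gaussian chain; marginalize_oldest and the scalar example are illustrative assumptions, not the estimator implemented in the thesis.

```python
import numpy as np

def marginalize_oldest(Lam, eta, d):
    """Marginalize out the oldest state (the first d entries) of a Gaussian
    in information form (Lam, eta) via the Schur complement. This is the
    operation that keeps a fixed-lag smoother's window, and hence its
    per-step cost, bounded."""
    Lmm, Lmr = Lam[:d, :d], Lam[:d, d:]
    Lrm, Lrr = Lam[d:, :d], Lam[d:, d:]
    em, er = eta[:d], eta[d:]
    # Solve Lmm^{-1} [Lmr | em] once, then form the Schur complement.
    sol = np.linalg.solve(Lmm, np.column_stack([Lmr, em]))
    Lam_marg = Lrr - Lrm @ sol[:, :-1]
    eta_marg = er - Lrm @ sol[:, -1]
    return Lam_marg, eta_marg

# Toy chain: three scalar states linked by unit-information odometry
# factors, with a unit-information prior on the first state.
Lam = np.array([[ 2.0, -1.0,  0.0],
                [-1.0,  2.0, -1.0],
                [ 0.0, -1.0,  1.0]])
eta = np.array([0.5, 0.0, 0.2])
Lam2, eta2 = marginalize_oldest(Lam, eta, d=1)
print(Lam2)
print(eta2)
```

Marginalizing in information form preserves the information the dropped states carried about the remaining window, which is what distinguishes fixed-lag smoothing from simply truncating the optimization window.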

Thesis Committee Members:
Nathan Michael, Chair
Red Whittaker
Michael Kaess
Hatem Alismail, Uber ATG