Super Odometry: A Robust LiDAR-Visual-Inertial Estimator for Challenging Environments
Abstract
We propose Super Odometry, a high-precision multi-modal sensor fusion framework that provides a simple but effective way to fuse multiple sensors, such as LiDAR, camera, and IMU, and achieves robust state estimation in perceptually degraded environments. Unlike traditional sensor-fusion methods, Super Odometry employs an IMU-centric data processing pipeline, which combines the advantages of loosely coupled and tightly coupled methods and recovers motion in a coarse-to-fine manner. The proposed framework is composed of three parts: IMU odometry, visual-inertial odometry, and LiDAR-inertial odometry. The visual-inertial odometry and LiDAR-inertial odometry provide pose priors to constrain the IMU bias and receive motion predictions from the IMU odometry. To ensure real-time performance, we apply a dynamic octree that consumes only 10% of the running time of a static KD-tree. The proposed system was deployed on drones and ground robots as part of Team Explorer's effort in the DARPA Subterranean Challenge, where the team won 1st and 2nd place in the Tunnel and Urban Circuits, respectively. To benefit the entire robotics community, we also release the SubT-MRS dataset, the first real-world dataset that specifically addresses SLAM failure scenarios by incorporating a variety of degraded conditions, multiple robotic platforms, and diverse sets of multi-modal sensors.
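To make the coarse-to-fine, IMU-centric structure concrete, the following is a minimal sketch of one fusion step. All names here (State, ImuOdometry, VisualInertialOdometry, LidarInertialOdometry, fuse_step) are hypothetical illustrations of the pipeline described in the abstract, not the actual Super Odometry implementation or API.

```python
# Minimal sketch of an IMU-centric, coarse-to-fine fusion loop.
# All names below are hypothetical, not the Super Odometry codebase.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class State:
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    orientation: Tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)  # quaternion (w, x, y, z)
    imu_bias: Tuple[float, ...] = (0.0,) * 6  # gyro (3) + accelerometer (3) bias

class ImuOdometry:
    """Coarse layer: integrates raw IMU measurements to predict motion."""
    def predict(self, state: State, imu_batch: List) -> State:
        # Integration of angular rate / acceleration, corrected by the
        # current bias estimate, would go here (details omitted).
        return State(state.position, state.orientation, state.imu_bias)

class VisualInertialOdometry:
    """Finer layer: refines the IMU prediction with visual features and
    returns a pose prior that also constrains the IMU bias."""
    def refine(self, prediction: State, frame) -> State:
        return prediction  # placeholder for feature tracking + optimization

class LidarInertialOdometry:
    """Finest layer: refines the pose via LiDAR scan registration."""
    def refine(self, prediction: State, scan) -> State:
        return prediction  # placeholder for scan-to-map registration

def fuse_step(state, imu_batch, frame, scan, imu_odom, vio, lio) -> State:
    # 1. Coarse: IMU odometry predicts motion between sensor frames.
    prediction = imu_odom.predict(state, imu_batch)
    # 2. Finer: the prediction seeds VIO, which returns a refined pose.
    vio_pose = vio.refine(prediction, frame)
    # 3. Finest: the VIO pose seeds LIO for scan-to-map registration.
    lio_pose = lio.refine(vio_pose, scan)
    # 4. Feedback: the refined pose priors constrain the IMU bias,
    #    keeping the central IMU odometry accurate for the next step.
    return lio_pose
```

The key design choice this sketch illustrates is that the IMU sits at the center: the camera and LiDAR modules never talk to each other directly, but each refines the IMU prediction and feeds its pose prior back to constrain the bias, so the pipeline degrades gracefully when either exteroceptive sensor fails.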
BibTeX
@mastersthesis{Zhao-2024-141996,
  author   = {Shibo Zhao},
  title    = {Super Odometry: A Robust LiDAR-Visual-Inertial Estimator for Challenging Environments},
  year     = {2024},
  month    = {July},
  school   = {Carnegie Mellon University},
  address  = {Pittsburgh, PA},
  number   = {CMU-RI-TR-24-49},
  keywords = {Robust SLAM; Sensor Fusion; 3D Reconstruction; Sensor Degradation},
}