Multi-Sensor Fusion for Robust Simultaneous Localization and Mapping
Abstract
Simultaneous Localization and Mapping (SLAM) consists of simultaneously estimating the state of the robot and reconstructing the surrounding environment. Over the last few decades, numerous state-of-the-art SLAM algorithms have been proposed and are frequently used in the robotics community. However, a SLAM algorithm might be fragile or even fail due to imperfect sensor data, uncontrived environments, and hardware failures. Furthermore, many approaches sacrifice loop closure and reduce to odometry, which can suffer from accumulated drift. In addition, most SLAM systems only provide metric map representations, which are difficult to use for high-level robot tasks. In this thesis, we first explore the integration of a monocular camera, a Light Detection and Ranging (Lidar) sensor, and an Inertial Measurement Unit (IMU) to achieve accurate and robust state estimation. We then propose a loop closure module that combines camera and Lidar measurements to correct drift in the motion estimate. Finally, we propose a 3D semantic occupancy mapping framework that integrates monocular vision, Lidar measurements, and our state estimation system. We first test our state estimation system on publicly available datasets as well as challenging custom-collected datasets. Then, a series of experiments is conducted to demonstrate the effectiveness of the loop closure module. Finally, the KITTI odometry dataset is used to demonstrate our 3D semantic occupancy mapping framework. The experimental results indicate that our proposed state estimation system works well in various challenging environments and that an accurate large-scale semantic map can be constructed.
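The abstract describes the semantic occupancy mapping component only at a high level. As a rough, generic illustration of what such a representation can involve, the sketch below fuses per-point class labels (e.g., projected from a camera's semantic segmentation onto Lidar points registered by a state estimator) into a sparse voxel grid by label counting. This is not the framework described in the thesis; all names (SemanticOccupancyGrid, update, semantic_map, voxel_size, min_hits) and the majority-vote fusion rule are assumptions made for illustration.

# Minimal sketch of semantic occupancy fusion by per-voxel label counting.
# Assumed, not the thesis pipeline: class and method names are hypothetical.
from collections import defaultdict, Counter
import numpy as np

class SemanticOccupancyGrid:
    def __init__(self, voxel_size=0.2):
        self.voxel_size = voxel_size
        # voxel index (ix, iy, iz) -> counts of semantic labels observed there
        self.label_counts = defaultdict(Counter)

    def update(self, points_world, labels):
        """Accumulate labeled points already transformed into the world frame
        (e.g., using poses from a state estimation system)."""
        voxels = np.floor(points_world / self.voxel_size).astype(int)
        for (ix, iy, iz), label in zip(voxels, labels):
            self.label_counts[(ix, iy, iz)][label] += 1

    def semantic_map(self, min_hits=3):
        """Return occupied voxels with their most frequently observed label."""
        return {v: c.most_common(1)[0][0]
                for v, c in self.label_counts.items()
                if sum(c.values()) >= min_hits}

# Usage with placeholder data: one "scan" of N world-frame points and
# per-point class ids standing in for camera-derived semantic labels.
grid = SemanticOccupancyGrid(voxel_size=0.2)
scan = np.random.rand(1000, 3) * 10.0          # placeholder point cloud
labels = np.random.randint(0, 5, size=1000)    # placeholder class ids
grid.update(scan, labels)
print(len(grid.semantic_map(min_hits=1)), "labeled voxels")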
BibTeX
@mastersthesis{Li-2019-117077,
  author  = {Cong Li},
  title   = {Multi-Sensor Fusion for Robust Simultaneous Localization and Mapping},
  year    = {2019},
  month   = {August},
  school  = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
  number  = {CMU-RI-TR-19-60},
}