Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences

Huai Yu, Weikun Zhen, Wen Yang, Ji Zhang, and Sebastian Scherer
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4588-4594, October 2020

Abstract

Lightweight camera localization in existing maps is essential for vision-based navigation. Currently, visual and visual-inertial odometry (VO/VIO) techniques are well developed for state estimation but suffer from inevitable accumulated drift and pose jumps upon loop closure. To overcome these problems, we propose an efficient monocular camera localization method in prior LiDAR maps using direct 2D-3D line correspondences. To handle the appearance differences and modality gaps between LiDAR point clouds and images, geometric 3D lines are extracted offline from LiDAR maps, while robust 2D lines are extracted online from video sequences. With the pose prediction from VIO, coarse 2D-3D line correspondences can be obtained efficiently. The camera poses and 2D-3D correspondences are then iteratively optimized by minimizing the projection error of the correspondences and rejecting outliers. Experimental results on the EuRoC MAV dataset and our own collected dataset demonstrate that the proposed method can efficiently estimate camera poses without accumulated drift or pose jumps in structured environments.
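
The sketch below illustrates the kind of pose refinement the abstract describes: given coarse 2D-3D line correspondences and a VIO pose prediction, the camera pose is refined by minimizing the projection error of 3D map lines onto detected 2D image lines, with a robust loss standing in for outlier rejection. This is not the authors' implementation; the function names, the axis-angle pose parameterization, and the data layout of the correspondences are illustrative assumptions.

    # Minimal sketch (assumed, not the paper's code) of 2D-3D line-based pose refinement.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def line_projection_residuals(pose6, K, matches):
        """Signed point-to-line distances of projected 3D segment endpoints.

        pose6   : [rx, ry, rz, tx, ty, tz], axis-angle rotation + translation (world -> camera)
        K       : 3x3 camera intrinsic matrix
        matches : list of (P1, P2, l2d), where P1, P2 are 3D endpoints of a map line
                  (numpy arrays, world frame) and l2d = np.array([a, b, c]) is the
                  matched 2D image line a*u + b*v + c = 0
        """
        R = Rotation.from_rotvec(pose6[:3]).as_matrix()
        t = pose6[3:]
        res = []
        for P1, P2, l2d in matches:
            l = l2d / np.linalg.norm(l2d[:2])      # normalize so residuals are in pixels
            for P in (P1, P2):
                p = K @ (R @ P + t)                # project a 3D endpoint into the image
                u, v = p[:2] / p[2]
                res.append(l[0] * u + l[1] * v + l[2])  # distance to the detected 2D line
        return np.asarray(res)

    def refine_pose(pose_init6, K, matches):
        """Robust least squares; the soft_l1 loss down-weights gross mismatches,
        standing in for the iterative outlier rejection described in the abstract."""
        sol = least_squares(line_projection_residuals, pose_init6,
                            args=(K, matches), loss="soft_l1", f_scale=2.0)
        return sol.x

In the pipeline described above, pose_init6 would come from the VIO prediction; after each optimization, correspondences with large residuals could be discarded and the matching and optimization repeated until the pose and the set of 2D-3D correspondences stabilize.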

BibTeX

@conference{Yu-2020-124339,
author = {Huai Yu and Weikun Zhen and Wen Yang and Ji Zhang and Sebastian Scherer},
title = {Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences},
booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
year = {2020},
month = {October},
pages = {4588--4594},
publisher = {IEEE/RSJ},
keywords = {Camera localization; 2D-3D line correspondences; LiDAR maps},
}