Real-Time Visual Localization System in Changing and Challenging Environments via Visual Place Recognition
Abstract
Localization is one of the fundamental capabilities needed to guarantee reliable robot autonomy. Many excellent Visual-Inertial and LiDAR-based algorithms have been developed to solve the localization problem. However, deploying these methods on a real-time portable device is challenging because of their high computational demands and costly sensors (LiDAR). Another option is Visual Place Recognition (VPR). Compared to Visual-Inertial and LiDAR-based algorithms, VPR only needs to establish relationships between observed and previously visited places, which reduces the required computing power. It also mainly uses a camera as its input sensor, which is much cheaper than LiDAR. However, most current VPR-related works do not provide an efficient and robust localization pipeline, and only a few run in real time on portable devices, even in non-challenging environments. To fill this gap in the VPR field, we present a real-time VPR-based localization system for changing and challenging environments.
The pipeline is built around the concept of image-sequence matching. We use an omnidirectional camera for visual input; compared to a pinhole or fisheye camera, it provides a broader field of view, which increases the performance and robustness of the proposed localization pipeline. The pipeline can also detect when an agent goes off the mapped path and re-localize the agent once they return to it. We build a mapping rover robot equipped with an omnidirectional camera, LiDAR, and an Inertial Measurement Unit (IMU), along with a localization helmet carrying an omnidirectional camera and an IMU as the portable device. Furthermore, we collect a campus-scale dataset with omnidirectional visual and LiDAR inputs on ten different trajectories, each traversed eight times under different illumination conditions and viewpoints.
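As a rough illustration of image-sequence matching, the sketch below scores a short run of query frames against a map of place descriptors and flags the agent as off-path when the best alignment is weak. Everything here is an assumption for illustration: the NumPy implementation, the names sequence_match, SEQ_LEN, and OFF_PATH_THRESH, and the diagonal-alignment scoring (in the spirit of SeqSLAM) are not taken from the thesis.

import numpy as np

# Illustrative parameters (assumptions, not from the thesis):
SEQ_LEN = 5            # frames matched jointly as one sequence
OFF_PATH_THRESH = 0.6  # best-score cutoff for declaring the agent off-path

def sequence_match(query_desc, map_desc, seq_len=SEQ_LEN, thresh=OFF_PATH_THRESH):
    """Match the most recent `seq_len` query descriptors against the map.

    query_desc: (seq_len, D) L2-normalized descriptors of recent frames
    map_desc:   (M, D) L2-normalized descriptors of the mapped path
    Returns (map index of the current place, score), or (None, score)
    when the best score falls below `thresh` (off-path).
    """
    sim = query_desc @ map_desc.T  # cosine similarities, shape (seq_len, M)
    M = map_desc.shape[0]
    best_idx, best_score = None, -np.inf
    # Slide the query sequence along the map and average similarity on the
    # diagonal, assuming roughly constant traversal speed relative to the map.
    for start in range(M - seq_len + 1):
        score = float(np.mean(sim[np.arange(seq_len), start + np.arange(seq_len)]))
        if score > best_score:
            best_idx, best_score = start + seq_len - 1, score
    if best_score < thresh:
        return None, best_score  # off-path: no mapped place matches well
    return best_idx, best_score

# Example: random unit vectors stand in for real image descriptors.
rng = np.random.default_rng(0)
map_desc = rng.normal(size=(100, 128))
map_desc /= np.linalg.norm(map_desc, axis=1, keepdims=True)
query_desc = map_desc[40:45] + 0.05 * rng.normal(size=(5, 128))
query_desc /= np.linalg.norm(query_desc, axis=1, keepdims=True)
print(sequence_match(query_desc, map_desc))  # expect index 44 with a high score

Averaging similarity over a sequence of frames rather than matching a single frame is what makes this style of matching tolerant to perceptual aliasing and appearance change.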
We show the pipeline's quantitative performance by evaluating it on our collected dataset against ground truth, and its qualitative performance through real-time demonstrations with our robot at Carnegie Mellon University. To show the robustness and generality of the pipeline, we test it in various changing and challenging environments, including outdoor paths, long and narrow indoor corridors, and narrow, dark, confined indoor spaces.
BibTeX
@mastersthesis{Ge-2022-132525,
author = {Ruohai Ge},
title = {Real-Time Visual Localization System in Changing and Challenging Environments via Visual Place Recognition},
year = {2022},
month = {July},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-22-31},
keywords = {Localization, Visual Place Recognition, changing and challenging environments, image sequence matching, VPR dataset},
}