MSR Thesis Talk: Ruohai Ge - Robotics Institute Carnegie Mellon University

MSR Speaking Qualifier

Ruohai Ge, Robotics Institute, Carnegie Mellon University
Wednesday, July 20
8:00 am to 9:00 am
NSH 3001

Title: Real-Time Visual Localization System in Changing and Challenging Environments via Visual Place Recognition


Abstract:

Localization is one of the fundamental capabilities required to guarantee reliable robot autonomy. Many excellent Visual-Inertial and LiDAR-based algorithms have been developed to solve the localization problem. However, deploying these methods on a real-time portable device is challenging due to their high computing demands and high-cost sensors (LiDAR). Another option is Visual Place Recognition (VPR). Compared to Visual-Inertial and LiDAR-based algorithms, VPR only needs to establish relationships between observed and previously visited places, reducing the required computing power, and it mainly uses a camera as its input sensor, which is much cheaper than LiDAR. However, most current VPR-related work does not provide an efficient and robust localization pipeline, and only a few methods run VPR in real time on portable devices, even in non-challenging environments. To fill this gap in the VPR field, we present a real-time VPR-based localization system for changing and challenging environments.
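
To make the core VPR idea concrete, below is a minimal sketch of place recognition as descriptor retrieval: a query image is reduced to a global descriptor and matched against descriptors of previously visited places. The histogram descriptor and the distance threshold are illustrative stand-ins, not the method presented in this talk.

import numpy as np

def describe(image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Stand-in global descriptor: a normalized intensity histogram.
    Real VPR systems typically use learned embeddings instead."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    hist = hist.astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)

def localize(query_image: np.ndarray, map_descriptors: np.ndarray,
             max_distance: float = 0.5):
    """Match a query image against mapped places.
    Returns (best_index, distance), or None when no place is close enough."""
    q = describe(query_image)
    dists = np.linalg.norm(map_descriptors - q, axis=1)
    best = int(np.argmin(dists))
    return (best, float(dists[best])) if dists[best] < max_distance else None

Because matching is a nearest-neighbor lookup over precomputed map descriptors rather than full pose estimation, the per-frame cost stays low enough for portable hardware.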

The pipeline is mainly built on the concept of image-sequence matching. We also choose an omnidirectional camera for visual input: compared to a pinhole or fisheye camera, an omnidirectional camera provides a broader field of view, increasing the performance and robustness of the proposed localization pipeline. The pipeline can additionally detect an agent's off-path status and re-localize the agent once it is back on the path. We build a mapping rover robot with an omnidirectional camera, LiDAR, and an Inertial Measurement Unit (IMU), and a localization helmet with an omnidirectional camera and IMU as the portable device. Furthermore, we gather a campus-scale dataset with omnidirectional visual and LiDAR inputs on ten different trajectories, each repeated eight times under different illumination conditions and viewpoints.
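
As a rough illustration of image-sequence matching and off-path detection, the sketch below scores candidate map positions by averaging descriptor distances along short aligned sequences, in the spirit of SeqSLAM-style approaches; the window length and threshold are assumptions made for the example, not the pipeline's actual parameters.

import numpy as np

def sequence_match(dist_matrix: np.ndarray, window: int = 5):
    """dist_matrix[i, j] is the descriptor distance between query frame i
    and map frame j. Score each candidate map position by averaging the
    distances along a diagonal of length `window`, which assumes a roughly
    constant relative speed between the query and the mapping run."""
    n_query, n_map = dist_matrix.shape
    assert n_query >= window and n_map >= window
    rows = np.arange(n_query - window, n_query)  # last `window` query frames
    scores = np.full(n_map, np.inf)
    for j in range(n_map - window + 1):
        scores[j + window - 1] = dist_matrix[rows, np.arange(j, j + window)].mean()
    best = int(np.argmin(scores))
    return best, float(scores[best])

def is_off_path(score: float, threshold: float = 0.6) -> bool:
    """Declare the agent off-path when even the best sequence match is poor;
    re-localization resumes once a later score falls back below the threshold."""
    return score > threshold

Matching over sequences rather than single frames suppresses spurious matches under illumination and viewpoint changes, which is what makes the off-path test meaningful: a persistently poor best score indicates the agent has left the mapped path.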

We show the pipeline's quantitative performance by evaluating it on the collected dataset against ground truth, and its qualitative performance through real-time demonstrations with our robots at Carnegie Mellon University. To show the robustness and generality of the pipeline, we test it in different changing and challenging environments, including outdoor scenes, long and narrow indoor corridors, and narrow, dark, confined indoor spaces.


Committee:

Prof. Sebastian Scherer (advisor)

Prof. Kris Kitani 

Cherie Ho