Mapping the MIT Stata Center: Large-scale Integrated Visual and RGB-D SLAM
Abstract
This paper describes progress toward an integrated large-scale visual and RGB-D mapping and localization system for the MIT Stata Center. The output of a real-time, temporally scalable 6-DOF visual SLAM system is used to generate low-fidelity maps, which are then used by the Kinect Monte Carlo Localization (KMCL) algorithm. The localization algorithm can track the camera pose during aggressive motion and can aid recovery from visual odometry failures. Because it uses dense depth information to track its location in the map, it is less sensitive to large viewpoint changes than feature-based approaches, for example when traversing a hallway in opposite directions. The low-fidelity map also makes the system more resilient to clutter and small changes in the environment. Integrating the localization algorithm with the mapping algorithm enables the system to operate in novel environments and allows robust navigation through the mapped area, even under aggressive motion. A major part of this project has been the collection, using a PR2 robot, of a large dataset covering the ten-floor MIT Stata Center, currently comprising approximately 40 kilometers of distance traveled. This paper also describes ongoing efforts to obtain centimeter-level ground truth for the robot's motion using prior building models.
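As a purely illustrative sketch of the kind of predict-weight-resample update a Monte Carlo localization algorithm such as KMCL performs (this is not the authors' implementation; the pose parameterization, the hypothetical render_expected_depth map renderer, and all noise parameters are assumptions), one localization step in Python might look like:

import numpy as np

def mcl_step(particles, weights, odom_delta, depth_obs,
             render_expected_depth, motion_noise=0.05, sigma=0.3):
    """One predict-weight-resample step of Monte Carlo localization.

    particles:             (N, 3) array of (x, y, yaw) pose hypotheses
    odom_delta:            relative motion estimate from visual odometry
    depth_obs:             observed (subsampled) depth image
    render_expected_depth: pose -> depth image predicted from the
                           low-fidelity map (hypothetical renderer)
    """
    n = len(particles)

    # Predict: propagate each particle through the odometry estimate,
    # perturbed by Gaussian noise to model motion uncertainty.
    # (A full implementation would compose odom_delta in each
    # particle's own frame rather than adding it globally.)
    particles = particles + odom_delta + np.random.normal(
        0.0, motion_noise, size=particles.shape)

    # Weight: score each hypothesis by how well the depth image
    # expected from the map at that pose matches the observed depth.
    for i in range(n):
        expected = render_expected_depth(particles[i])
        err = np.nanmean((depth_obs - expected) ** 2)
        weights[i] = np.exp(-err / (2.0 * sigma ** 2))
    weights = weights / weights.sum()

    # Resample: draw a new particle set proportional to the weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

In a practical system a low-variance systematic resampler would typically replace the multinomial draw shown here, and the depth comparison would be evaluated over many particles in parallel.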
BibTeX
@inproceedings{Fallon-2012-7555,
  author    = {Maurice F. Fallon and Hordur Johannsson and Michael Kaess and David M. Rosen and Elias Muggler and John J. Leonard},
  title     = {Mapping the MIT Stata Center: Large-scale Integrated Visual and RGB-D SLAM},
  booktitle = {Proceedings of the RSS '12 Workshop on RGB-D: Advanced Reasoning with Depth Cameras},
  year      = {2012},
  month     = {July},
}