Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments

Shichao Yang, Yu Song, Michael Kaess, and Sebastian Scherer
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1222–1229, October 2016

Abstract

Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize planes or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
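
To make the pop-up plane idea concrete, below is a minimal sketch of one way a single-image "pop-up" wall plane can be generated: a pixel on a detected ground-wall boundary is back-projected onto the ground plane using the camera intrinsics and height, and a vertical wall plane is then fit through the resulting 3D boundary points. The intrinsics, camera pose convention, and function names here are illustrative assumptions for the sketch, not the paper's actual implementation (which also uses learned ground-wall edge detection).

```python
import numpy as np

# Illustrative values only (assumptions, not the paper's parameters).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])   # pinhole intrinsics
# World frame: z up, camera looking along world +x. Rows of R are the
# camera axes (right, down, forward) expressed in world coordinates.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
cam_height = 1.0                         # camera height above ground, meters

def backproject_to_ground(u, v):
    """Intersect the viewing ray of pixel (u, v) with the ground plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                           # rotate into world frame
    cam_center = np.array([0.0, 0.0, cam_height])       # camera center in world
    s = -cam_center[2] / ray_world[2]                   # scale so that z = 0
    return cam_center + s * ray_world

def popup_wall_plane(p1, p2):
    """Pop up a vertical wall plane (n, d) through two ground boundary
    points, so that n . x = d for every point x on the wall."""
    up = np.array([0.0, 0.0, 1.0])
    n = np.cross(p2 - p1, up)            # horizontal normal -> vertical plane
    n /= np.linalg.norm(n)
    return n, float(n @ p1)

# Example: pop up a wall from two pixels on a detected ground-wall boundary.
g1 = backproject_to_ground(100, 400)
g2 = backproject_to_ground(500, 420)
normal, offset = popup_wall_plane(g1, g2)
```

In the full system, planes popped up this way serve as landmark measurements in the SLAM factor graph alongside point features, which is what keeps the estimator constrained when few salient points are available.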

BibTeX

@conference{Yang-2016-5600,
  author    = {Shichao Yang and Yu Song and Michael Kaess and Sebastian Scherer},
  title     = {Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2016},
  month     = {October},
  pages     = {1222--1229},
}