Bridging Text Spotting and SLAM with Junction Features | Robotics Institute, Carnegie Mellon University

Bridging Text Spotting and SLAM with Junction Features

Hsueh-Cheng Wang, Chelsea Finn, Liam Paull, Michael Kaess, Ruth Rosenholtz, Seth Teller, and John Leonard
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3701-3708, September 2015

Abstract

Navigating in a previously unknown environment and recognizing naturally occurring text in a scene are two important autonomous capabilities that are typically treated as distinct. However, these two tasks are potentially complementary: (i) scene and pose priors can benefit text spotting, and (ii) the ability to identify and associate text features can benefit navigation accuracy through loop closures. Previous approaches to autonomous text spotting typically require significant training data and are too slow for real-time implementation. In this work, we propose a novel high-level feature descriptor, the "junction", which is particularly well-suited to text representation and is also fast to compute. We show that we are able to improve SLAM through text spotting on datasets collected with a Google Tango device, illustrating how location priors enable improved loop closure with text features.
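The abstract's point (ii) — that re-identifying the same text string, gated by a location prior, can propose loop closures — can be illustrated with a minimal sketch. This is not the paper's junction-feature pipeline; the class name, the string-equality matching, and the fixed-radius prior are all simplifying assumptions for illustration.

```python
from collections import defaultdict
from math import dist  # Euclidean distance, Python 3.8+


class TextLandmarkIndex:
    """Hypothetical index of recognized text strings keyed by where they were spotted.

    A real system (as in the paper) would match descriptors, not exact strings,
    and would use the SLAM pose covariance rather than a fixed radius.
    """

    def __init__(self, radius=5.0):
        self.radius = radius  # location prior: max distance (m) to accept a match
        self.sightings = defaultdict(list)  # text -> list of (frame_id, (x, y))

    def observe(self, frame_id, text, pose):
        """Record a text spotting and return loop-closure candidates:
        earlier frames where the same string was seen within `radius`
        of the current pose estimate."""
        candidates = [fid for fid, p in self.sightings[text]
                      if dist(p, pose) <= self.radius]
        self.sightings[text].append((frame_id, pose))
        return candidates


# Usage: the same "EXIT" sign re-spotted nearby proposes a loop closure,
# while a distant sighting of identical text is rejected by the prior.
index = TextLandmarkIndex(radius=5.0)
index.observe(0, "EXIT", (0.0, 0.0))            # first sighting, no candidates
near = index.observe(10, "EXIT", (1.0, 1.0))    # nearby re-sighting -> [0]
far = index.observe(20, "EXIT", (100.0, 0.0))   # same text, far away -> []
```

The fixed-radius gate stands in for the location prior the abstract describes: without it, every repeated "EXIT" sign in a building would trigger a spurious closure.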

BibTeX

@conference{Wang-2015-6030,
author = {Hsueh-Cheng Wang and Chelsea Finn and Liam Paull and Michael Kaess and Ruth Rosenholtz and Seth Teller and John Leonard},
title = {Bridging Text Spotting and SLAM with Junction Features},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2015},
month = {September},
pages = {3701--3708},
}