
Sensor fusion for fiducial tags: Highly robust pose estimation from single frame RGBD

Pengju Jin, Pyry K. Matikainen, and Siddhartha S. Srinivasa
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5770–5776, September 2017

Abstract

Although many planar fiducial-marker systems have been proposed for augmented reality and computer vision, using them for accurate pose estimation in robotic applications, where the collected data are noisy, remains a challenge. The problem is inherently difficult because these fiducial systems operate solely in RGB image space and the resolution of cameras on robots is often constrained; as a result, small amounts of image noise cause the tag detection process to produce large pose estimation errors. This paper describes an algorithm that improves the pose estimation accuracy of square fiducial markers in difficult scenes by fusing information from RGB and depth sensors. The algorithm retains the high detection rate and low false-positive rate of existing fiducial systems while making pose estimation much more robust to tag size, lighting, and sensor noise. These improvements make fiducial tags suitable for robotic tasks that require high pose accuracy in real-world environments.
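
The paper's fusion algorithm itself is not reproduced here; as a rough illustration of the general RGB-D fusion idea, the sketch below refines an RGB-only square-tag pose (computed from the four detected corners) with a plane fitted to the depth pixels inside the tag. The function names, the plane-fit refinement, and the assumption of a metric depth image registered to the RGB frame are illustrative choices, not the authors' method.

# Illustrative sketch only (not the paper's algorithm): fuse an RGB-only
# square-tag pose with a plane fitted to the registered depth image.
# Assumes corners from any square-tag detector (e.g. AprilTag/ArUco),
# a depth image in meters aligned to the RGB frame, and intrinsics K.
import numpy as np
import cv2


def backproject(depth, K, mask):
    # Back-project masked, valid depth pixels into 3-D camera coordinates.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    return np.column_stack([(u - cx) * z / fx, (v - cy) * z / fy, z])


def fit_plane(points):
    # Least-squares plane fit via SVD; returns (unit normal, centroid).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid


def fused_tag_pose(corners_px, tag_size, depth, K):
    # corners_px: 4x2 pixel corners, ordered to match the object points below.
    # Returns (R, t) of the tag in the camera frame.

    # 1. RGB-only pose from the four corners (planar PnP).
    h = tag_size / 2.0
    obj = np.array([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]], np.float64)
    _, rvec, tvec = cv2.solvePnP(obj, corners_px.astype(np.float64), K, None,
                                 flags=cv2.SOLVEPNP_IPPE_SQUARE)
    R_rgb, _ = cv2.Rodrigues(rvec)

    # 2. Fit a plane to the depth pixels inside the tag quadrilateral.
    mask = np.zeros(depth.shape, np.uint8)
    cv2.fillConvexPoly(mask, corners_px.astype(np.int32), 1)
    pts = backproject(depth, K, mask.astype(bool))
    if len(pts) < 50:                      # too little valid depth: keep RGB pose
        return R_rgb, tvec.ravel()
    normal, centroid = fit_plane(pts)
    if np.dot(normal, R_rgb[:, 2]) < 0:    # keep the same facing as the RGB pose
        normal = -normal

    # 3. Fuse: keep the in-plane orientation from RGB, snap the tag's z-axis
    #    to the depth-plane normal and its position to the plane centroid.
    x_axis = R_rgb[:, 0] - np.dot(R_rgb[:, 0], normal) * normal
    x_axis /= np.linalg.norm(x_axis)
    R_fused = np.column_stack([x_axis, np.cross(normal, x_axis), normal])
    return R_fused, centroid

The depth plane constrains the out-of-plane rotation and distance, which are exactly the components that small corner-localization errors corrupt most in an RGB-only solution; the in-plane rotation, which RGB recovers well, is kept from the PnP result.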

BibTeX

@conference{Jin-2017-122665,
author = {Pengju Jin and Pyry K. Matikainen and Siddhartha S. Srinivasa},
title = {Sensor fusion for fiducial tags: Highly robust pose estimation from single frame RGBD},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2017},
month = {September},
pages = {5770--5776},
}