Camera and LIDAR Fusion for Mapping of Actively Illuminated Subterranean Voids

Conference Paper, Proceedings of the 7th International Conference on Field and Service Robotics (FSR '09), pp. 421-430, July 2009

Abstract

A method is developed that improves the accuracy of super-resolution range maps over interpolation by fusing actively illuminated HDR camera imagery with LIDAR data in dark subterranean environments. The key idea is shape recovery through estimation of the illumination function, integrated with the range data in a Markov Random Field (MRF) framework. A virtual reconstruction built from data collected in the Bruceton Research Mine is presented.
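The abstract describes image-guided super-resolution of LIDAR range maps in an MRF framework. As a rough illustration only, and not the paper's actual formulation (which also estimates the illumination function for shape recovery), the sketch below shows a generic quadratic MRF that upsamples sparse LIDAR depth to camera resolution, with smoothness weights modulated by image intensity. The function name and parameters (mrf_depth_superresolution, w_data, sigma_i) are illustrative assumptions, not from the paper.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def mrf_depth_superresolution(sparse_depth, valid_mask, image, w_data=10.0, sigma_i=0.05):
    # Upsample sparse LIDAR depth to camera resolution with a quadratic MRF.
    # sparse_depth : (H, W) LIDAR ranges projected into the image (0 where absent)
    # valid_mask   : (H, W) bool, True where a LIDAR sample exists
    # image        : (H, W) intensity in [0, 1] used to modulate smoothness
    H, W = image.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    rows, cols, vals = [], [], []
    b = np.zeros(n)

    # Data term: w_data * (d_p - z_p)^2 at pixels with LIDAR measurements.
    d_idx = idx[valid_mask]
    rows.extend(d_idx); cols.extend(d_idx)
    vals.extend(np.full(d_idx.size, w_data))
    b[d_idx] = w_data * sparse_depth[valid_mask]

    # Smoothness term: w_pq * (d_p - d_q)^2 over 4-neighbour pixel pairs, with
    # weights that decay across intensity edges so depth discontinuities can
    # align with image edges.
    def add_edges(p, q, ip, iq):
        w = np.exp(-np.abs(ip - iq) / sigma_i)
        rows.extend(p); cols.extend(p); vals.extend(w)
        rows.extend(q); cols.extend(q); vals.extend(w)
        rows.extend(p); cols.extend(q); vals.extend(-w)
        rows.extend(q); cols.extend(p); vals.extend(-w)

    add_edges(idx[:, :-1].ravel(), idx[:, 1:].ravel(),
              image[:, :-1].ravel(), image[:, 1:].ravel())  # horizontal neighbours
    add_edges(idx[:-1, :].ravel(), idx[1:, :].ravel(),
              image[:-1, :].ravel(), image[1:, :].ravel())  # vertical neighbours

    # Setting the gradient of the quadratic energy to zero yields a sparse
    # linear system A d = b; duplicate (row, col) entries are summed on build.
    A = sparse.csc_matrix((vals, (rows, cols)), shape=(n, n))
    return spsolve(A, b).reshape(H, W)

A direct sparse solve is one standard way to handle such a quadratic MRF; at full camera resolution an iterative solver such as conjugate gradient would typically be preferable.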

BibTeX

@conference{Wong-2009-122571,
author = {Uland Wong and Ben Garney and Warren Whittaker and Red Whittaker},
title = {Camera and LIDAR Fusion for Mapping of Actively Illuminated Subterranean Voids},
booktitle = {Proceedings of 7th International Conference on Field and Service Robotics (FSR '09)},
year = {2009},
month = {July},
pages = {421--430},
}