Camera and LIDAR Fusion for Mapping of Actively Illuminated Subterranean Voids
Conference Paper, Proceedings of the 7th International Conference on Field and Service Robotics (FSR '09), pp. 421–430, July 2009
Abstract
A method is developed that improves the accuracy of super-resolution range maps over interpolation by fusing actively illuminated HDR camera imagery with LIDAR data in dark subterranean environments. The key approach is shape recovery through estimation of the illumination function, integrated in a Markov Random Field (MRF) framework. A virtual reconstruction using data collected from the Bruceton Research Mine is presented.
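As background on the MRF integration step, a representative image-guided range super-resolution energy is sketched below. The symbols are illustrative assumptions, not taken from the paper: z is the dense range map being estimated on the camera grid, d_i are the sparse LIDAR ranges at the measured pixel set L, I is the (illumination-corrected) HDR image, N is the pixel 4-neighborhood, and lambda, c are weighting parameters. The paper's actual potentials, which incorporate the estimated illumination function for shape recovery, may differ.

E(z) = \sum_{i \in L} (z_i - d_i)^2 + \lambda \sum_{(i,j) \in \mathcal{N}} w_{ij} (z_i - z_j)^2, \qquad w_{ij} = \exp(-c \lVert I_i - I_j \rVert)

Minimizing this quadratic energy favors a range map that agrees with the LIDAR samples while smoothing most strongly between pixels of similar image appearance, which is the general mechanism by which camera imagery guides super-resolution of sparse range data.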
BibTeX
@conference{Wong-2009-122571,
  author = {Uland Wong and Ben Garney and Warren Whittaker and Red Whittaker},
  title = {Camera and LIDAR Fusion for Mapping of Actively Illuminated Subterranean Voids},
  booktitle = {Proceedings of 7th International Conference on Field and Service Robotics (FSR '09)},
  year = {2009},
  month = {July},
  pages = {421--430},
}