High Level Visual Features for Underwater Place Recognition

Jie Li, Ryan Eustice, and M. Johnson-Roberson
Conference Paper, Proceedings of (ICRA) International Conference on Robotics and Automation, pp. 3652-3659, May 2015

Abstract

This paper reports on a method for robust visual relocalization between temporally separated sets of underwater images gathered by a robot. The place recognition and relocalization problem is more challenging in the underwater environment mainly due to three factors: 1) changes in illumination; 2) long-term changes in the visual appearance of features because of phenomena like biofouling on man-made structures and growth or movement in natural features; and 3) the low density of visually salient features available for image matching. To address these challenges, a patch-based feature matching approach is proposed, which uses image segmentation and local intensity contrast to locate salient patches and HOG descriptors to establish correspondences between patches. Compared to traditional point-based features, which are sensitive to dramatic appearance changes underwater, patch-based features encode higher-level information, such as shape or structure, that tends to persist across years in underwater environments. The algorithm is evaluated on real data collected over multiple years by a Hovering Autonomous Underwater Vehicle performing ship hull inspection. Relocalization performance across missions from different years is compared to that of traditional methods.
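To make the patch-matching idea in the abstract concrete, the sketch below shows one way to describe image patches with HOG and match them by nearest-neighbor distance with a ratio test. This is an illustrative approximation only, not the authors' implementation: the patch size, HOG parameters, ratio threshold, and hand-supplied patch centers are all assumptions; the paper instead derives salient patches from image segmentation and local intensity contrast.

```python
# Illustrative sketch of HOG-based patch description and matching (assumed
# parameters; not the method from the paper). Requires numpy and scikit-image.
import numpy as np
from skimage.feature import hog

PATCH_SIZE = 64  # assumed square patch size, in pixels


def describe_patch(image, center, patch_size=PATCH_SIZE):
    """Crop a square patch around `center` (row, col) of a grayscale image
    and return its HOG descriptor as a flat feature vector."""
    r, c = center
    half = patch_size // 2
    patch = image[r - half:r + half, c - half:c + half]
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)


def match_patches(descs_a, descs_b, ratio=0.8):
    """Nearest-neighbor matching between two sets of patch descriptors,
    keeping only matches that pass a Lowe-style ratio test."""
    descs_b = np.asarray(descs_b)
    matches = []
    for i, d in enumerate(descs_a):
        dists = np.linalg.norm(descs_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In this toy setup, descriptors would be computed for candidate patch centers in two temporally separated images and passed to `match_patches`; the resulting patch correspondences could then feed a relocalization or geometric verification step.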

BibTeX

@conference{Li-2015-130188,
author = {Jie Li and Ryan Eustice and M. Johnson-Roberson},
title = {High Level Visual Features for Underwater Place Recognition},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2015},
month = {May},
pages = {3652--3659},
}