Towards Acoustic Structure from Motion for Imaging Sonar
Abstract
We present a novel approach, called acoustic structure from motion (ASFM), for recovering 3D scene structure from multiple 2D sonar images while simultaneously localizing the sonar. Imaging sonar, also known as forward-looking sonar (FLS), is commonly used for autonomous underwater vehicle (AUV) navigation. An FLS provides bearing and range information to a target, but the elevation of the target within the sensor's vertical field of view is unknown. Hence, current state-of-the-art techniques commonly make a flat-surface (ground) assumption so that FLS data can be used for navigation. Unlike prior methods, our solution does not require a flat-surface assumption and can incorporate information from many frames, in contrast to pairwise methods that use only two frames at a time. ASFM is inspired by structure from motion (SFM), the problem of recovering 3D structure from multiple camera images while also recovering the position and orientation from which the images were taken. In this paper, we formulate and evaluate the joint optimization over multiple sonar readings of the same scene taken from different AUV poses, the sonar equivalent of bundle adjustment. We evaluate our approach on both simulated data and real FLS data, assuming that feature extraction and data association have already been performed; the acoustic equivalents of these two important components of SFM are left for future work.
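The paper itself does not include code; the following is a minimal, hypothetical Python sketch of the measurement model described above (an FLS observes a 3D point only as range and bearing, with elevation unobserved) and of a bundle-adjustment-style residual over multiple sonar poses. All function names, the reduction of poses to translations, and the use of scipy's least_squares are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.optimize import least_squares

def sonar_projection(p):
    # An FLS measures a 3D point p = (x, y, z) in the sonar frame only as
    # range r = ||p|| and bearing theta = atan2(y, x); elevation is unobserved.
    r = np.linalg.norm(p)
    theta = np.arctan2(p[1], p[0])
    return np.array([r, theta])

def residuals(x, measurements, n_poses, n_points):
    # Toy bundle-adjustment-style residual: jointly adjust sonar poses
    # (reduced to 3D translations here for brevity, ignoring rotation)
    # and 3D landmarks so that predicted (range, bearing) pairs match
    # the measurements.
    poses = x[:3 * n_poses].reshape(n_poses, 3)    # (tx, ty, tz) per pose
    points = x[3 * n_poses:].reshape(n_points, 3)  # 3D landmark positions
    res = []
    for (i, j, r_meas, th_meas) in measurements:   # pose i observes point j
        p_local = points[j] - poses[i]
        r_pred, th_pred = sonar_projection(p_local)
        res.extend([r_pred - r_meas, th_pred - th_meas])
    return np.array(res)

# Usage sketch: measurements is a list of (pose_index, point_index, range, bearing)
# tuples and x0 stacks initial pose and landmark estimates; the solver refines both.
# result = least_squares(residuals, x0, args=(measurements, n_poses, n_points))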
BibTeX
@conference{Huang-2015-6031,
  author = {Tiffany Huang and Michael Kaess},
  title = {Towards Acoustic Structure from Motion for Imaging Sonar},
  booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year = {2015},
  month = {September},
  pages = {758-765},
}