Acoustic Structure from Motion
Abstract
Although the ocean spans most of the Earth's surface, our ability to explore and perform tasks underwater is still limited by high costs and slow, inefficient 3D mapping and localization techniques. Because light propagates only a short distance underwater, imaging sonar, or forward-looking sonar (FLS), is commonly used for autonomous underwater vehicle (AUV) navigation and perception. An FLS provides the bearing and range to a target, but the target's elevation within the sensor's field of view is unknown. Hence, current state-of-the-art techniques commonly make a flat-surface (planar) assumption so that FLS data can be used for navigation. Towards expanding the possibilities of underwater operations, a novel approach, entitled acoustic structure from motion (ASFM), is presented for recovering 3D scene structure from multiple 2D sonar images while simultaneously localizing the sonar. Unlike other methods, ASFM does not require a flat-surface assumption and can incorporate information from many frames, as opposed to pairwise methods that use only two frames at a time. The optimization over several sonar observations of the same scene from different poses, the acoustic equivalent of bundle adjustment, together with automatic data association, is formulated and evaluated on both simulated data and real FLS data.
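As context for the sensing model described above, the following is a minimal sketch (illustrative only, not code or notation from the thesis) of the standard imaging-sonar projection: a 3D point in the sensor frame yields a range and bearing measurement, while its elevation angle is unobserved. This is the ambiguity that ASFM-style methods resolve by jointly constraining the same scene points across images taken from several poses.

import numpy as np

def fls_project(point_in_sonar_frame):
    # Project a 3D point (x, y, z), expressed in the sonar frame, to the
    # (range, bearing) pair reported by a forward-looking sonar.
    x, y, z = point_in_sonar_frame
    r = np.hypot(np.hypot(x, y), z)   # slant range to the target
    theta = np.arctan2(y, x)          # bearing within the horizontal aperture
    # The elevation angle, arctan2(z, hypot(x, y)), is integrated over the
    # vertical aperture and is therefore lost in a single image.
    return r, theta

def point_on_elevation_arc(r, theta, phi):
    # Any point on this arc (fixed range r and bearing theta, varying
    # elevation phi) produces the same FLS measurement.
    return (r * np.cos(phi) * np.cos(theta),
            r * np.cos(phi) * np.sin(theta),
            r * np.sin(phi))

# Two scene points at different elevations yield identical measurements,
# illustrating the ambiguity that multiple viewpoints are needed to resolve.
print(fls_project(point_on_elevation_arc(10.0, 0.3, 0.05)))
print(fls_project(point_on_elevation_arc(10.0, 0.3, 0.15)))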
BibTeX
@mastersthesis{Huang-2016-5513,
  author  = {Tiffany Huang},
  title   = {Acoustic Structure from Motion},
  year    = {2016},
  month   = {May},
  school  = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
  number  = {CMU-RI-TR-16-08},
}