Carnegie Mellon University
2:00 pm to 3:00 pm
GHC 4405
Abstract:
Imaging sonars have been used for a variety of tasks geared toward increasing the autonomy of underwater vehicles: image registration and mosaicing, vehicle localization, object recognition, mapping, and path planning, to name a few. However, the complexity of the image formation process has led many algorithms to make the restrictive assumption that the scene geometry is mostly planar, that the sensor motion is planar, or both. While these assumptions may hold for scenarios such as seafloor mapping and inspecting large, locally flat ship hulls, they do not hold for many other non-planar underwater scenes of interest, such as archaeological sites, bridge and pier pilings, and coral reefs. This work aims to develop localization and mapping algorithms that enable the use of imaging sonar on autonomous underwater vehicles in general environments, without making restrictive assumptions about the scene geometry or the motion of the vehicle.
We take inspiration from similar localization and mapping algorithms developed for analogous sensors such as optical cameras. In particular, for sparse localization and mapping, we formulate the problem of acoustic bundle adjustment, modeled on the widely known optical bundle adjustment problem. Previous attempts at solving this problem have suffered from large errors in both localization and map accuracy because they did not analyze the inherent degeneracies introduced by the imaging sonar sensor model. We account for these degeneracies and provide a flexible solution to the acoustic bundle adjustment problem, as well as a formulation for incorporating the measurements resulting from bundle adjustment into a factor graph framework for efficient, long-term localization.
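As a rough illustration of why such degeneracies arise: an imaging sonar typically reports the range and azimuth of a return but not its elevation angle, so a single measurement constrains a landmark only to an arc. The sketch below (Python, not the thesis implementation) sets up a bundle-adjustment-style residual for such a range/azimuth model; the function names, the simplified planar-pose parameterization, and the use of scipy's least_squares are all illustrative assumptions.

```python
# Minimal sketch of an acoustic bundle adjustment residual, assuming the common
# imaging-sonar model in which a 3D point is observed as (range, azimuth) and
# the elevation angle is lost. Illustrative only; not the thesis formulation.
import numpy as np
from scipy.optimize import least_squares

def sonar_project(point_world, R_ws, t_ws):
    """Project a 3D world point into a (range, azimuth) sonar measurement.

    R_ws, t_ws: rotation and translation taking world points into the sonar frame.
    """
    p = R_ws @ point_world + t_ws          # point in the sonar frame
    rng = np.linalg.norm(p)                # range to the point
    azimuth = np.arctan2(p[1], p[0])       # bearing in the sonar's imaging plane
    return np.array([rng, azimuth])        # elevation is not observed

def residuals(params, observations, n_poses, n_points):
    """Stack (range, azimuth) reprojection errors over all observations.

    params: flattened [poses (x, y, z, yaw) ..., points (x, y, z) ...]
    observations: list of (pose_idx, point_idx, measured_range, measured_azimuth)
    Poses are simplified to 3D position plus yaw purely for illustration.
    """
    poses = params[:4 * n_poses].reshape(n_poses, 4)
    points = params[4 * n_poses:].reshape(n_points, 3)
    res = []
    for pose_idx, pt_idx, r_meas, az_meas in observations:
        x, y, z, yaw = poses[pose_idx]
        c, s = np.cos(yaw), np.sin(yaw)
        R_ws = np.array([[ c,  s, 0.0],
                         [-s,  c, 0.0],
                         [0.0, 0.0, 1.0]])          # world -> sonar rotation
        t_ws = -R_ws @ np.array([x, y, z])          # world -> sonar translation
        pred = sonar_project(points[pt_idx], R_ws, t_ws)
        res.extend([pred[0] - r_meas, pred[1] - az_meas])
    return np.asarray(res)

# Usage: given initial pose/point guesses x0 and a list of measurements, jointly
# refine both. Directions along the unobserved elevation angle are exactly the
# poorly constrained directions of this least-squares problem.
# sol = least_squares(residuals, x0, args=(observations, n_poses, n_points))
```

One plausible way to connect such an optimization to the factor graph framework mentioned above is to convert the optimized relative poses into between-pose factors in a pose graph; the thesis formulation may of course differ in the details.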
In this thesis, we also consider the problem of dense 3D mapping with known poses. While many previous attempts at 3D imaging sonar mapping have made strong simplifying assumptions to reduce the complexity of the problem, we observe that more accurate 3D maps may be generated by performing inference based on the generative sensor model. We propose a two-step process for reconstructing metrically accurate surface maps: initialization and refinement. The refinement step adjusts an initial estimate of the surface in order to maximize the likelihood of the observed sonar images, and it relies heavily on an initial surface estimate that is sufficiently close to the ground truth model. We therefore also propose a search-based initialization procedure that generates a surface estimate from a single sonar image, again using the generative sensor model. These methods will be evaluated using both simulated sonar imagery and real-world imagery of known structures in a test tank environment.
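To make the refinement step concrete, here is a deliberately simplified sketch (again not the thesis implementation): the surface is a heightmap, the generative model is a toy range-binning renderer, and Gaussian image noise is assumed so that maximizing the likelihood of the observed images reduces to minimizing squared differences between simulated and observed images. The function names and the finite-difference optimizer are illustrative choices.

```python
# Toy sketch of likelihood-based surface refinement. Assumes a heightmap surface,
# a crude range-binning "renderer" as the generative model, and Gaussian noise.
import numpy as np

def simulate_image(heights, xy, sensor_pos, range_bins):
    """Toy generative model: bin the ranges of surface samples into an intensity image."""
    pts = np.column_stack([xy, heights])                    # surface samples (x, y, z)
    ranges = np.linalg.norm(pts - sensor_pos, axis=1)       # range to each sample
    image, _ = np.histogram(ranges, bins=range_bins)
    return image.astype(float)

def neg_log_likelihood(heights, xy, observations, range_bins):
    """Under Gaussian noise: sum of squared image differences over all views."""
    return sum(np.sum((simulate_image(heights, xy, pos, range_bins) - obs) ** 2)
               for pos, obs in observations)

def refine(heights_init, xy, observations, range_bins, step=0.01, iters=50, eps=1e-3):
    """Refine an initial surface estimate by finite-difference gradient descent."""
    heights = heights_init.copy()
    for _ in range(iters):
        base = neg_log_likelihood(heights, xy, observations, range_bins)
        grad = np.zeros_like(heights)
        for i in range(len(heights)):                       # numerical gradient
            h = heights.copy()
            h[i] += eps
            grad[i] = (neg_log_likelihood(h, xy, observations, range_bins) - base) / eps
        heights -= step * grad                              # move toward higher likelihood
    return heights
```

Because the objective is only as good as its starting point, the refinement would be seeded with the output of the search-based, single-image initialization described above.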
Thesis Committee Members:
Michael Kaess, Chair
Martial Hebert
George Kantor
John Leonard, Massachusetts Institute of Technology