
Synthetic Data Generation for Deep Learning of Underwater Disparity Estimation

Elizabeth Olson, Corina Barbalata, Junming Zhang, Katherine Skinner, and M. Johnson-Roberson
Conference Paper, Proceedings of IEEE/MTS Oceans: Charleston (OCEANS '18), October 2018

Abstract

In this paper, we present a new methodology to generate synthetic data for training a deep neural network (DNN) to estimate depth maps directly from stereo images of underwater scenes. The proposed method projects real underwater images onto landscapes of randomized heights in a 3D rendering framework. This procedure provides a synthetic stereo image pair and the corresponding depth map of the scene, which are used to train a disparity estimation DNN. Through this process, we learn to match the underwater feature space using supervised learning without the need to capture extensive real underwater depth maps for ground truth. Our results demonstrate improved reconstruction accuracy compared to traditional computer-vision feature-matching methods and to state-of-the-art DNNs trained on synthetic terrestrial data.
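The paper itself does not include code; the following minimal Python sketch (NumPy only) illustrates the flavor of the pipeline the abstract describes: a randomized heightfield stands in for the rendered terrain, and the ground-truth disparity of a rectified stereo pair follows from the rendered depth via d = f * B / Z. The projection of real underwater imagery onto the terrain and the stereo rendering itself are omitted, and every parameter value below (resolution, focal length, baseline, depth range) is an assumption chosen for illustration, not the authors' setting.

import numpy as np

def random_heightfield(h, w, num_waves=12, seed=0):
    """Smooth random terrain built from a sum of randomly oriented sinusoids."""
    rng = np.random.default_rng(seed)
    yy, xx = np.meshgrid(np.linspace(0.0, 1.0, h), np.linspace(0.0, 1.0, w), indexing="ij")
    terrain = np.zeros((h, w))
    for _ in range(num_waves):
        freq = rng.uniform(1.0, 8.0)          # spatial frequency (cycles across the image)
        angle = rng.uniform(0.0, np.pi)       # wave orientation
        phase = rng.uniform(0.0, 2.0 * np.pi)
        amp = rng.uniform(0.2, 1.0) / freq    # smaller amplitude at higher frequency
        terrain += amp * np.sin(2.0 * np.pi * freq * (xx * np.cos(angle) + yy * np.sin(angle)) + phase)
    return terrain

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Ground-truth disparity in pixels for a rectified stereo pair: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

if __name__ == "__main__":
    H, W = 480, 640          # assumed image resolution
    focal_px = 800.0         # assumed focal length in pixels
    baseline_m = 0.12        # assumed stereo baseline in metres
    terrain = random_heightfield(H, W)
    # Place the randomized terrain roughly 2-5 m below a downward-looking camera (assumed range).
    depth_m = 3.5 + 1.5 * np.tanh(terrain)
    disparity_px = depth_to_disparity(depth_m, focal_px, baseline_m)
    print("depth range [m]:", float(depth_m.min()), float(depth_m.max()))
    print("disparity range [px]:", float(disparity_px.min()), float(disparity_px.max()))

In the full pipeline described by the abstract, training pairs would then be formed by texturing such a terrain with a real underwater image, rendering it from the left and right camera poses, and pairing the rendered stereo images with the corresponding disparity map as supervision.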

BibTeX

@conference{Olson-2018-130157,
author = {Elizabeth Olson and Corina Barbalata and Junming Zhang and Katherine Skinner and M. Johnson-Roberson},
title = {Synthetic Data Generation for Deep Learning of Underwater Disparity Estimation},
booktitle = {Proceedings of IEEE/MTS Oceans: Charleston (OCEANS '18)},
year = {2018},
month = {October},
}