
Learning Depth from Monocular Videos using Direct Methods

C. Wang, J. M. Buenaposada, R. Zhu, and S. Lucey
Conference Paper, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2022-2030, June 2018

Abstract

The ability to predict depth from a single image, using recent advances in CNNs, is of increasing interest to the vision community. Unsupervised learning strategies are particularly appealing because they can exploit much larger and more varied monocular video datasets during training, without the need for ground-truth depth or stereo. In previous work, separate pose and depth CNN predictors had to be learned jointly so that their combined outputs minimized the photometric error. Inspired by recent advances in direct visual odometry (DVO), we argue that the depth CNN predictor can be learned without a pose CNN predictor. Further, we demonstrate empirically that incorporating a differentiable implementation of DVO, along with a novel depth normalization strategy, substantially improves performance over the state of the art trained on monocular videos.
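The depth normalization strategy mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the idea is that dividing each predicted depth map by its own mean removes the global scale ambiguity of monocular training, so the photometric loss cannot be trivially reduced by predicting ever-smaller depths. The helper name `normalize_depth` is hypothetical.

```python
import numpy as np

def normalize_depth(depth, eps=1e-8):
    """Scale each depth map in a batch to have unit mean.

    depth: (N, H, W) array of positive depth predictions.
    Two predictions that differ only by a global scale factor
    normalize to the same map, so the downstream photometric
    loss treats them identically.
    """
    mean = depth.mean(axis=(1, 2), keepdims=True)
    return depth / (mean + eps)

# Usage: a prediction and a globally rescaled copy normalize identically.
pred = np.random.rand(2, 4, 4) + 0.5
normed = normalize_depth(pred)
```

Normalizing per image (rather than per batch) keeps the scale of each sample independent, which matters because monocular training recovers depth only up to an arbitrary per-sequence scale.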

BibTeX

@conference{Wang-2018-121017,
author = {C. Wang and J. M. Buenaposada and R. Zhu and S. Lucey},
title = {Learning Depth from Monocular Videos using Direct Methods},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2018},
month = {June},
pages = {2022--2030},
}