Monocular Depth from Small Motion Video Accelerated

C. Ham, M. Chang, S. Lucey, and S. Singh
Conference Paper, Proceedings of the International Conference on 3D Vision (3DV '17), pp. 575–583, October, 2017

Abstract

We propose a novel four-stage pipeline for densely reconstructing depth from small-baseline video sequences, designed with speed as the goal. The pipeline exploits the sub-pixel precision of direct photometric bundle adjustment to reduce the number of tracked points required to estimate an accurate pose. Instead of the exhaustive plane-sweeping approach of existing small-baseline methods, dense depth maps are computed efficiently with an algorithm inspired by PatchMatch. Rather than minimizing a stereo matching error, our algorithm minimizes the variance of intensities over multiple frames, computed about a robustly estimated mean. Experimental results suggest that our method copes with a wider range of baselines and sequence sizes. We also compare qualitative results on real small-motion clips from Ha et al. as well as our own, and show that our method produces dense depth maps of similar or better quality at least 10x faster.
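The variance-based photometric cost described above is the part most easily illustrated in code. The sketch below is a minimal interpretation, not the authors' implementation: it assumes grayscale intensities, one sample per frame for a given pixel and depth hypothesis, and uses a Huber-weighted IRLS estimate as the robust mean (the abstract only says "robust mean estimation", so the choice of estimator, the function name, and the parameters are assumptions).

```python
import numpy as np

def robust_variance_cost(intensities, iters=3, delta=0.1):
    """Photometric cost for one pixel and depth hypothesis.

    intensities: (F,) array of grayscale samples, one per frame, obtained by
    projecting the hypothesized 3D point into each of the F frames.
    Returns the variance of the samples about a robust mean, so a handful of
    outlier frames (e.g., occlusions) do not dominate the cost.
    The IRLS/Huber estimator here is an assumption; the paper only states
    that a robust mean estimation is used.
    """
    mu = np.median(intensities)              # robust initialization
    for _ in range(iters):                   # a few IRLS refinement steps
        r = np.abs(intensities - mu)
        # Huber weights: 1 for small residuals, down-weight large ones
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
        mu = np.sum(w * intensities) / np.sum(w)
    return np.mean((intensities - mu) ** 2)  # variance about the robust mean

# Example: five frames, one occluded sample; the outlier barely shifts the
# robust mean, so the cost stays close to the inlier variance.
samples = np.array([0.41, 0.43, 0.40, 0.95, 0.42])
print(robust_variance_cost(samples))
```

In a PatchMatch-style scheme, a cost like this would be evaluated for each candidate depth during random initialization and neighbor propagation, keeping whichever hypothesis yields the lowest cost per pixel.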

BibTeX

@conference{Ham-2017-121030,
author = {C. Ham and M. Chang and S. Lucey and S. Singh},
title = {Monocular Depth from Small Motion Video Accelerated},
booktitle = {Proceedings of International Conference on 3D Vision (3DV '17)},
year = {2017},
month = {October},
pages = {575--583},
}