Web Stereo Video Supervision for Depth Prediction from Dynamic Scenes

C. Wang, S. Lucey, F. Perazzi, and O. Wang
Conference Paper, Proceedings of International Conference on 3D Vision (3DV '19), pp. 348-357, September 2019

Abstract

We present a fully data-driven method to compute depth from diverse monocular video sequences that contain large amounts of non-rigid objects, e.g., people. To learn reconstruction cues for non-rigid scenes, we introduce a new dataset of stereo videos scraped from the web. The dataset covers a wide variety of scene types and features large numbers of non-rigid objects, especially people. From it, we compute disparity maps that serve as supervision for training our approach. We propose a loss function that allows us to generate depth predictions even when the camera intrinsics and stereo baselines in the dataset are unknown. We validate the use of large amounts of Internet video by evaluating our method on existing video datasets with depth supervision, including SINTEL and KITTI, and show that our approach generalizes better to natural scenes.
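The key technical point in the abstract is training on web-stereo disparity whose absolute scale is unknown: without camera intrinsics or a stereo baseline, the target disparity is only defined up to an affine ambiguity per image. Below is a minimal PyTorch sketch of one way to handle this, via a scale-and-shift-invariant loss that aligns the prediction to the target with a per-image least-squares fit before measuring the residual (similar in spirit to MiDaS-style training). The function name, the closed-form alignment, and the L1 residual are illustrative assumptions, not the paper's exact loss formulation.

import torch

def scale_shift_invariant_loss(pred_disp, gt_disp, mask):
    """Affine-invariant disparity loss (illustrative sketch, not the paper's exact loss).

    pred_disp, gt_disp: float tensors of shape (H, W)
    mask: bool tensor of shape (H, W) marking valid disparity pixels
    """
    # Keep only pixels with valid supervision.
    p = pred_disp[mask]                                   # (N,)
    g = gt_disp[mask]                                     # (N,)

    # Closed-form least-squares fit of scale s and shift t:
    # minimize || s * p + t - g ||^2 over the valid pixels.
    A = torch.stack([p, torch.ones_like(p)], dim=1)       # (N, 2)
    sol = torch.linalg.lstsq(A, g.unsqueeze(1)).solution  # (2, 1)
    s, t = sol[0], sol[1]

    # Residual after alignment; L1 is one common, robust choice.
    aligned = s * p + t
    return torch.mean(torch.abs(aligned - g))

if __name__ == "__main__":
    # Toy check: a target that is an affine transform of the prediction
    # (plus noise) should yield a near-zero loss after alignment.
    pred = torch.rand(240, 320)
    gt = 2.0 * pred + 0.5 + 0.01 * torch.randn(240, 320)
    mask = gt > 0
    print(scale_shift_invariant_loss(pred, gt, mask))

Because the scale and shift are re-fit per image inside the loss, the network never needs to know the true baseline or focal length; it only has to predict disparity that is affinely consistent with the stereo-derived target.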

BibTeX

@conference{Wang-2019-121007,
  author = {C. Wang and S. Lucey and F. Perazzi and O. Wang},
  title = {Web Stereo Video Supervision for Depth Prediction from Dynamic Scenes},
  booktitle = {Proceedings of International Conference on 3D Vision (3DV '19)},
  year = {2019},
  month = {September},
  pages = {348--357},
}