DispSegNet: Leveraging Semantics for End-to-End Learning of Disparity Estimation from Stereo Imagery
Abstract
Recent work has shown that convolutional neural networks (CNNs) can be applied successfully to disparity estimation, but these methods still suffer from errors in regions of low texture, occlusions, and reflections. Concurrently, deep learning for semantic segmentation has shown great progress in recent years. In this letter, we design a CNN architecture that combines these two tasks to improve the quality and accuracy of disparity estimation with the help of semantic segmentation. Specifically, we propose a network structure in which these two tasks are highly coupled. One key novelty of this approach is a two-stage refinement process: initial disparity estimates are refined with an embedding learned from the semantic segmentation branch of the network. The proposed model is trained in an unsupervised manner, in which one image of the stereo pair is warped according to the estimated disparity and compared against the image from the other camera. Another key advantage of the proposed approach is that a single network outputs both disparity estimates and semantic labels. Both outputs are of great use in autonomous vehicle operation, where real-time constraints are key, so producing them jointly increases the viability of driving applications. Experiments on the KITTI and Cityscapes datasets show that our model achieves state-of-the-art results and that leveraging the embedding learned from semantic segmentation improves the performance of disparity estimation.
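The unsupervised training signal described above relies on warping one stereo image into the other view using the predicted disparity and penalizing the photometric difference. A minimal sketch of that idea in PyTorch is shown below; the function names are illustrative, and the full DispSegNet objective is not reproduced here, only the basic warp-and-compare step.

```python
import torch
import torch.nn.functional as F


def warp_right_to_left(right_img, disparity):
    """Warp the right image into the left view using a disparity map.

    right_img: (N, C, H, W); disparity: (N, 1, H, W) in pixels.
    For a rectified pair, the left pixel at x corresponds to the
    right pixel at x - disparity.
    """
    n, _, h, w = right_img.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=right_img.dtype),
        torch.arange(w, dtype=right_img.dtype),
        indexing="ij",
    )
    xs = xs.unsqueeze(0).expand(n, -1, -1) - disparity.squeeze(1)
    ys = ys.unsqueeze(0).expand(n, -1, -1)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * xs / (w - 1) - 1.0, 2.0 * ys / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(right_img, grid, align_corners=True)


def photometric_loss(left_img, right_img, disparity):
    # L1 photometric error between the left image and the right
    # image warped into the left view; this is the self-supervised
    # signal that replaces ground-truth disparity labels.
    return (left_img - warp_right_to_left(right_img, disparity)).abs().mean()
```

With the correct disparity, the warped right image reconstructs the left image, so minimizing this loss pushes the network toward accurate disparities without labeled data.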
BibTeX
@article{Zhang-2019-130153,
author = {Junming Zhang and Katherine Skinner and Ram Vasudevan and Matthew Johnson-Roberson},
title = {DispSegNet: Leveraging Semantics for End-to-End Learning of Disparity Estimation from Stereo Imagery},
journal = {IEEE Robotics and Automation Letters},
year = {2019},
month = {April},
volume = {4},
number = {2},
pages = {1162-1169},
}