Photometric Bundle Adjustment for Vision-Based SLAM
Abstract
We propose a novel algorithm for the joint refinement of structure and motion parameters directly from image data, without relying on fixed and known correspondences. In contrast to traditional bundle adjustment (BA), where the optimal parameters are determined by minimizing the reprojection error using tracked features, the proposed algorithm relies on maximizing photometric consistency and estimates the correspondences implicitly. Since the proposed algorithm does not require correspondences, its application is not limited to corner-like structure; any pixel with nonvanishing gradient can be used in the estimation process. Furthermore, we demonstrate the feasibility of refining the motion and structure parameters simultaneously using the photometric objective in unconstrained scenes, without requiring restrictive assumptions such as planarity. The proposed algorithm is evaluated on a range of challenging outdoor datasets and is shown to improve upon the accuracy of state-of-the-art VSLAM methods obtained by minimizing the reprojection error with traditional BA as well as loop closure.
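To make the contrast with reprojection-error BA concrete, the sketch below computes a photometric residual of the kind the abstract describes: each scene point (parameterized here by a reference pixel and an inverse-free depth) is back-projected, transformed by a candidate camera pose, and projected into a target image, and the residual is the intensity difference between the two views. All function and variable names (`photometric_residuals`, `bilinear_sample`, the pixel/depth parameterization) are illustrative assumptions, not the paper's implementation; a real system would stack these residuals over many frames and minimize them jointly over poses and depths.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample image intensity at a subpixel location (x, y) via bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] +
            wx * (1 - wy) * img[y0, x0 + 1] +
            (1 - wx) * wy * img[y0 + 1, x0] +
            wx * wy * img[y0 + 1, x0 + 1])

def photometric_residuals(I_ref, I_tgt, pixels, depths, K, R, t):
    """
    Photometric residuals for points seen in a reference image and warped
    into a target image under pose (R, t) and intrinsics K.

    pixels : list of integer (u, v) locations in the reference image
    depths : depth of each point along the reference camera ray
    """
    K_inv = np.linalg.inv(K)
    residuals = []
    for (u, v), d in zip(pixels, depths):
        X = d * (K_inv @ np.array([u, v, 1.0]))   # back-project to 3D
        Xc = R @ X + t                            # move into target camera frame
        p = K @ (Xc / Xc[2])                      # project into target image
        # Residual: warped target intensity minus reference intensity.
        residuals.append(bilinear_sample(I_tgt, p[0], p[1]) - I_ref[v, u])
    return np.array(residuals)
```

At the correct pose and depths (and under brightness constancy) these residuals vanish; the paper's optimization drives them toward zero over all selected high-gradient pixels, which is why no explicit feature tracking is needed.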
Final version will appear on link.springer.com
BibTeX
@conference{Alismail-2016-5532,
  author    = {Hatem Said Alismail and Brett Browning and Simon Lucey},
  title     = {Photometric Bundle Adjustment for Vision-Based SLAM},
  booktitle = {Proceedings of Asian Conference on Computer Vision (ACCV '16)},
  year      = {2016},
  month     = {May},
  pages     = {324--341},
  publisher = {Springer},
}