
Robust Monocular Visual Odometry for a Ground Vehicle in Undulating Terrain

Conference Paper, Proceedings of the 8th International Conference on Field and Service Robots (FSR '12), pp. 311-326, July 2012

Abstract

We present a robust method for monocular visual odometry capable of accurate position estimation even when operating in undulating terrain. Our algorithm uses a steering model to recover rotation and translation separately. The robot's 3-DOF orientation is recovered by minimizing image projection error, while its translation is recovered by solving an NP-hard optimization problem through an approximation. This decoupled estimation ensures a low computational cost. The proposed method handles undulating terrain by approximating ground patches as locally flat, but not necessarily level, and recovers the inclination angle of the local ground during motion estimation. The method can also detect automatically, by analyzing the residuals, when this assumption is violated. If the imaged terrain cannot be sufficiently approximated by locally flat patches, wheel odometry is used instead to provide robust estimation. Our field experiments show a mean relative error of less than 1%.
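The "locally flat but not level" check described above can be illustrated with a small sketch. The code below is not the paper's implementation; it is a hypothetical stand-in that fits a plane to terrain points by least squares, uses the fit residual as a flatness test, and falls back to a wheel-odometry estimate when the test fails. The function names, the RMS threshold, and the use of a plane fit as the residual test are all assumptions for illustration.

```python
import numpy as np

def fit_local_plane(points):
    """Least-squares plane fit via SVD.

    points: (N, 3) array of terrain points.
    Returns the unit plane normal and the RMS orthogonal residual,
    which measures how well a locally flat patch explains the terrain.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right-singular vector for the smallest singular value is the
    # direction of least variance, i.e. the best-fit plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal
    return normal, np.sqrt(np.mean(residuals ** 2))

def choose_translation(points, wheel_delta, vo_delta, rms_threshold=0.05):
    """Hypothetical fallback logic: trust the visual estimate only when
    the local ground is well approximated by a plane (locally flat, not
    necessarily level); otherwise use wheel odometry."""
    _, rms = fit_local_plane(points)
    return vo_delta if rms < rms_threshold else wheel_delta
```

Note that an inclined but flat patch passes the test (the fitted normal simply tilts with the slope), which mirrors the paper's distinction between "flat" and "level".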

BibTeX

@conference{Zhang-2012-7638,
author = {Ji Zhang and Sanjiv Singh and George A. Kantor},
title = {Robust Monocular Visual Odometry for a Ground Vehicle in Undulating Terrain},
booktitle = {Proceedings of 8th International Conference on Field and Service Robots (FSR '12)},
year = {2012},
month = {July},
pages = {311--326},
}