Robotics Institute, Carnegie Mellon University

3D Perception for Accurate Row Following: Methodology and Results

Ji Zhang, Andrew D. Chambers, Silvio Mano Maeta, Marcel Bergerman, and Sanjiv Singh
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5306–5313, November 2013

Abstract

Rows of trees, such as those in orchards, planted in straight parallel lines can provide navigation cues for autonomous machines that operate between them. When the tree canopies are well managed, tree rows appear similar to corridor walls and a simple 2D sensing scheme suffices. However, when the tree canopies are three-dimensional, or when ground vegetation occludes tree trunks, a three-dimensional sensing mode becomes necessary. An additional complication is that under dense canopies GPS is unreliable, and hence unsuitable for registering data from sensors onboard a traversing vehicle. Here, we present a method to register 3D data from a lidar sensor onboard a vehicle that must accurately determine its pose relative to the rows. We first register the point clouds into a common reference frame, and then estimate the positions of nearby tree rows and trunks to determine the vehicle pose. Our method is tested online and with data from commercial orchards. Experimental results show that the accuracy is sufficient to enable accurate traversal between tree rows even when tree canopies do not approximate planar walls.
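The row-following idea in the abstract can be illustrated with a minimal sketch: given trunk positions already extracted from registered lidar data (a step the paper performs in 3D), fit the row's principal direction and read off the vehicle's heading error and lateral offset relative to the row. This is a hypothetical stand-in, not the authors' algorithm; the function name `row_pose` and the PCA-style line fit are assumptions for illustration.

```python
import math

def row_pose(trunks):
    """Estimate pose relative to a tree row from trunk positions.

    trunks: list of (x, y) trunk centers in the vehicle frame
    (x forward, y left), assumed to belong to one row.
    Hypothetical sketch: fits the row line by the principal
    direction of the trunk scatter (PCA on the 2D covariance).
    Returns (heading_error_rad, lateral_offset_m).
    """
    n = len(trunks)
    mx = sum(x for x, _ in trunks) / n
    my = sum(y for _, y in trunks) / n
    # 2D covariance of the trunk positions
    sxx = sum((x - mx) ** 2 for x, _ in trunks) / n
    syy = sum((y - my) ** 2 for _, y in trunks) / n
    sxy = sum((x - mx) * (y - my) for x, y in trunks) / n
    # Principal-axis angle of the scatter = row direction
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    # Signed perpendicular distance from the vehicle origin to the
    # row line through the centroid with direction (cos t, sin t)
    offset = my * math.cos(theta) - mx * math.sin(theta)
    return theta, offset
```

A row parallel to the vehicle's heading yields a heading error near zero and an offset equal to the row's lateral distance; a steering controller would drive both quantities toward a desired setpoint.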

Notes
accepted

BibTeX

@conference{Zhang-2013-7789,
author = {Ji Zhang and Andrew D. Chambers and Silvio Mano Maeta and Marcel Bergerman and Sanjiv Singh},
title = {3D Perception for Accurate Row Following: Methodology and Results},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2013},
month = {November},
pages = {5306--5313},
}