ROLS: Robust Object-level SLAM for grape counting

Anjana K. Nellithimaru and George Kantor, Robotics Institute, Carnegie Mellon University
Workshop Paper, CVPR '19 Workshops, pp. 2648–2656, June 2019

Abstract

Camera-based Simultaneous Localization and Mapping (SLAM) in an agricultural field can help crop growers count fruit and estimate yield, but it is challenging due to the scene dynamics, varying illumination, and limited texture inherent in outdoor environments. We propose a pipeline that combines recent advances in deep learning with traditional 3D processing techniques to achieve fast and accurate SLAM in vineyards. We use images captured by a stereo camera, together with their 3D reconstruction, to detect objects of interest and divide them into three classes: grapes, leaves, and branches. The accuracy of these detections is improved by leveraging information about each object's local 3D neighborhood. We achieve an F1 score of 0.977 against ground-truth grape counts from images. Our method builds a dense 3D model of the scene with centimeter-level localization accuracy, without assuming constant illumination or a static scene. The method generalizes readily to other crops, such as oranges and apples, with minor modifications to the pipeline.
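The abstract's idea of refining per-image class predictions with each object's local 3D neighborhood, and scoring counts with F1, can be illustrated with a toy sketch. This is not the paper's implementation; the `Detection` structure, the majority-vote refinement rule, and the threshold are all hypothetical simplifications for illustration.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Detection:
    label: str                                   # class predicted from the 2D image
    neighbor_labels: list = field(default_factory=list)  # labels of nearby 3D points

def refine_with_3d_neighborhood(dets, min_support=0.5):
    """Toy stand-in for neighborhood-based refinement: keep a 'grape'
    detection only if enough of its 3D neighbors are also 'grape'."""
    refined = []
    for d in dets:
        if d.label == "grape":
            counts = Counter(d.neighbor_labels)
            support = counts["grape"] / max(len(d.neighbor_labels), 1)
            if support >= min_support:
                refined.append(d)
        else:
            refined.append(d)
    return refined

def f1_score(tp, fp, fn):
    """Standard F1 from true/false positives and false negatives,
    as used to compare predicted grape counts against ground truth."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, a "grape" detection surrounded mostly by "leaf" points would be rejected by `refine_with_3d_neighborhood`, while one with majority-"grape" neighbors survives; `f1_score` then summarizes counting accuracy in a single number.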

BibTeX

@workshop{Nellithimaru-2019-119995,
author = {Anjana K. Nellithimaru and George Kantor},
title = {ROLS: Robust Object-level SLAM for grape counting},
booktitle = {Proceedings of CVPR '19 Workshops},
year = {2019},
month = {June},
pages = {2648--2656},
}