Vision-Based Robotic Convoy Driving

Henry Schneiderman, M. Nashman, A. J. Wavering, and R. Lumia
Conference Paper, Proceedings of Machine Vision and Applications, Vol. 8, No. 6, pp. 359–364, November 1995

Abstract

This article describes a method for vision-based autonomous convoy driving in which a robotic vehicle autonomously pursues another vehicle. Pursuit is achieved by visually tracking a target mounted on the back of the pursued vehicle. Visual tracking must be robust, since a failure leads to catastrophic results. To make our system as reliable as possible, uncertainty is accounted for in each measurement and propagated through all computations. We use a best linear unbiased estimate (BLUE) of the target's position in each separate image, and a polynomial least-mean-square fit (LMSF) to estimate the target's motion. Robust autonomous convoy driving has been demonstrated under varying lighting conditions, shadowing, other vehicles, turns at intersections, curves, and hills. A continuous, autonomous convoy drive of over 33 km (20 miles) was completed successfully at speeds averaging between 50 and 75 km/h (30–45 mph).
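
The abstract names two generic estimation steps: a BLUE of the target's position from multiple uncertain measurements in a single image, and a polynomial least-mean-square fit of that position over time to estimate the target's motion. The minimal Python/NumPy sketch below illustrates the generic form of both steps only; it is not the authors' implementation, and the function names, the inverse-variance fusion of two measurements, and the second-order fit with a fixed lookahead are illustrative assumptions.

import numpy as np

# --- BLUE-style fusion of independent position measurements ---
# For independent, unbiased measurements z_i of the same quantity with
# variances var_i, the best linear unbiased estimate is the
# inverse-variance weighted mean (a textbook BLUE result, assumed here
# as a stand-in for the paper's estimator).
def blue_estimate(measurements, variances):
    w = 1.0 / np.asarray(variances, dtype=float)
    z = np.asarray(measurements, dtype=float)
    est = np.sum(w * z) / np.sum(w)
    est_var = 1.0 / np.sum(w)          # variance of the fused estimate
    return est, est_var

# --- Polynomial least-mean-square fit of target motion over time ---
# Fit a low-order polynomial to the per-frame position estimates and
# evaluate it slightly ahead to predict where the target will be.
def fit_and_predict(times, positions, weights, order=2, lookahead=0.1):
    # np.polyfit minimizes sum((w_i * residual_i)^2), so pass 1/sigma
    # as the weight for an inverse-variance-style weighted fit.
    coeffs = np.polyfit(times, positions, deg=order, w=weights)
    return np.polyval(coeffs, times[-1] + lookahead)

# Toy usage with made-up numbers: fuse two measurements of the target's
# lateral offset in one frame, then fit a short history over time.
x, x_var = blue_estimate([1.02, 0.97], [0.04, 0.01])
t_hist = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
x_hist = np.array([0.90, 0.95, 1.02, 1.08, 1.15])
sigma = np.full_like(x_hist, 0.05)
x_pred = fit_and_predict(t_hist, x_hist, 1.0 / sigma)
print(f"fused x = {x:.3f} (var {x_var:.4f}), predicted x = {x_pred:.3f}")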

BibTeX

@conference{Schneiderman-1995-16118,
author = {Henry Schneiderman and M. Nashman and A. J. Wavering and R. Lumia},
title = {Vision-Based Robotic Convoy Driving},
booktitle = {Proceedings of Machine Vision and Applications},
year = {1995},
month = {November},
volume = {8},
number = {6},
pages = {359--364},
}