
Vision Based Tactical Driving

PhD Thesis, Tech. Report, CMU-RI-TR-96-14, Robotics Institute, Carnegie Mellon University, 1996

Abstract

Much progress has been made toward solving the autonomous lane-keeping problem using vision-based methods. Systems have been demonstrated which can drive robot vehicles at high speed for long distances. The current challenge for vision-based on-road navigation researchers is to create systems that maintain the performance of existing lane-keeping systems while adding the ability to execute tactical-level driving tasks such as lane transition and intersection detection and navigation. There are many ways to add tactical functionality to a driving system. Solutions range from developing task-specific software modules to grafting additional functionality onto a basic lane-keeping system. Solutions like these are problematic because they either make reuse of acquired knowledge difficult or impossible, or preclude the use of alternative lane-keeping systems. A more desirable solution is to develop a robust, lane-keeper-independent control scheme that provides the functionality to execute tactical actions. Based on this hypothesis, techniques used to execute tactical-level driving tasks should: (1) be based on a single framework that is applicable to a variety of tactical-level actions; (2) be extensible to other vision-based lane-keeping systems; and (3) require little or no modification of the lane-keeping system with which they are used. This thesis examines a framework, called Virtual Active Vision, which provides this functionality through intelligent control of the visual information presented to the lane-keeping system. Novel solutions based on this framework for two classes of tactical driving tasks, lane transition and intersection detection and traversal, are presented in detail. Specifically, algorithms are presented which allow the ALVINN lane-keeping system to robustly execute lane transition maneuvers such as lane changing, entrance and exit ramp detection and traversal, and obstacle avoidance. Additionally, with the aid of active camera control, the ALVINN system enhanced with Virtual Active Vision tools can successfully detect and navigate basic road intersections.
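
The thesis itself develops the Virtual Active Vision framework in full; purely as a rough illustration of the underlying idea, the sketch below synthesizes a laterally shifted "virtual camera" view of the road from a single real camera image and hands it to an otherwise unmodified lane keeper. Everything here is an assumption made for illustration and is not taken from the thesis: the flat-road homography model, the Python/NumPy form, the function names (road_homography, warp_to_virtual_view, lane_keeper), and all parameter values.

# Illustrative sketch only: synthesize the view a "virtual camera" (e.g. shifted
# one lane width sideways) would see of the road plane, then feed that image to
# an unmodified lane keeper. All names and values are assumptions.
import numpy as np

def road_homography(K, R, t, n=np.array([0.0, 1.0, 0.0]), d=1.2):
    """Flat-road homography H = K (R - t n^T / d) K^-1.
    K: 3x3 camera intrinsics; R, t: virtual-camera rotation/translation relative
    to the real camera; n, d: assumed road-plane normal and camera height (m)."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def warp_to_virtual_view(image, H):
    """Inverse-warp the real image into the virtual camera's pixel grid
    (nearest-neighbor resampling, kept minimal for brevity)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts                      # map each virtual pixel back to the real image
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out

# Hypothetical use during a lane change: place the virtual camera one assumed
# lane width (3.5 m) to the side and let the unchanged lane keeper steer toward
# the center of the view it is given (sign conventions depend on the frame).
# K = ...; R = np.eye(3); t = np.array([-3.5, 0.0, 0.0])
# steering = lane_keeper(warp_to_virtual_view(frame, road_homography(K, R, t)))

The point of the sketch is simply that the lane keeper's code is never touched; only the image it is shown changes, which is the lane-keeper-independent property the abstract argues for.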

BibTeX

@phdthesis{Jochem-1996-14062,
author = {Todd Jochem},
title = {Vision Based Tactical Driving},
year = {1996},
month = {January},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-96-14},
}