Using Virtual Active Vision Tools to Improve Autonomous Driving Tasks

Tech. Report CMU-RI-TR-94-39, Robotics Institute, Carnegie Mellon University, October 1994

Abstract

ALVINN is a simulated neural network for road following. In its most basic form, it is trained to take a subsampled, preprocessed video image as input and produce a steering wheel position as output. ALVINN has demonstrated robust performance in a wide variety of situations, but is limited by its lack of geometric models. Grafting geometric reasoning onto a non-geometric base would be difficult and would create a system with diluted capabilities. A much better approach is to leave the basic neural network intact, preserving its real-time performance and generalization capabilities, and to apply geometric transformations to the input image and the output steering vector. These transformations form a new set of tools and techniques called Virtual Active Vision. The thesis for this work is: Virtual Active Vision tools will improve the capabilities of neural-network-based autonomous driving systems.
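The report itself defines the actual Virtual Active Vision transformations. As a rough, self-contained NumPy sketch of the pattern the abstract describes (warp the input image to simulate a virtual camera, run the unmodified network, then transform the output steering vector to compensate), consider the following. Every name and size here is an illustrative assumption, not code from the report: steer_net is a placeholder for the trained ALVINN network, virtual_pan stands in for the geometric image transformations, and the 30x32 input retina, 45 steering output units, and pan-to-steering offset are guesses for demonstration.

    import numpy as np

    def virtual_pan(image: np.ndarray, shift_px: int) -> np.ndarray:
        """Crudely simulate a virtual camera pan by shifting image
        columns (a stand-in for the report's geometric transforms)."""
        return np.roll(image, shift_px, axis=1)

    def steer_net(image: np.ndarray) -> np.ndarray:
        """Placeholder for the trained ALVINN network: maps a
        subsampled image to a distribution over steering units."""
        # Hypothetical stub: a fixed random linear layer plus softmax.
        rng = np.random.default_rng(0)
        w = rng.standard_normal((image.size, 45))  # 45 units is an assumption
        logits = image.ravel() @ w
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def compensate_steering(steering: np.ndarray, shift_units: int) -> np.ndarray:
        """Shift the steering distribution back so the command applies
        to the vehicle's true heading, undoing the virtual pan."""
        return np.roll(steering, -shift_units)

    # Usage: pan the virtual camera, run the intact network on the
    # transformed image, then correct the output steering vector.
    frame = np.zeros((30, 32))               # subsampled input retina (assumed size)
    panned = virtual_pan(frame, 4)
    raw = steer_net(panned)
    corrected = compensate_steering(raw, 2)  # pan-to-steering offset is an assumption

The point of the structure, as the abstract argues, is that all geometric reasoning lives in the wrapper functions; the network in the middle is untouched, so its real-time performance and generalization are preserved.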

BibTeX

@techreport{Jochem-1994-13782,
author = {Todd Jochem},
title = {Using Virtual Active Vision Tools to Improve Autonomous Driving Tasks},
year = {1994},
month = {October},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-94-39},
}