Real-time Computational Needs of a Multisensor Feature-Based Range-Estimation Method
Abstract
The computer vision literature describes many methods for performing obstacle detection and avoidance for autonomous or semi-autonomous vehicles. These methods may be broadly categorized into field-based and feature-based techniques. Field-based techniques have the advantage of a regular computational structure at every pixel throughout the image plane. Feature-based techniques are much more data-driven, in that computational complexity increases dramatically in regions of the image populated by features. It is widely believed that a parallel architecture is necessary to run computer vision algorithms in real time. Field-based techniques lend themselves to easy parallelization because of their regular computational needs. However, we have found that field-based methods are sensitive to noise and have traditionally been difficult to generalize to arbitrary vehicle motion. We have therefore sought techniques to parallelize feature-based methods. This paper describes the computational needs of a parallel feature-based range-estimation method developed at NASA Ames. Issues of processing-element performance, load balancing, and data-flow bandwidth are addressed, along with a performance review of two architectures on which the feature-based method has been implemented.
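The scheduling contrast at the heart of the abstract can be illustrated with a small sketch. This is not the paper's implementation; the OpenMP framing, the Feature record, and the per-feature cost field are illustrative assumptions. It shows why a static split of the image plane suffices for field-based work, while feature-based work needs dynamic load balancing across processing elements.

/* Illustrative sketch, not the paper's code: contrasts the regular
 * per-pixel parallelism of field-based methods with the uneven,
 * feature-clustered workload of feature-based methods.
 * Build: cc -O2 -fopenmp sketch.c -o sketch
 */
#include <stdio.h>

#define W 64
#define H 48

/* Field-based: identical work at every pixel, so statically splitting
 * the image plane across processing elements balances perfectly. */
static void field_based(const float *img, float *out) {
    #pragma omp parallel for
    for (int i = 0; i < W * H; i++)
        out[i] = 0.5f * img[i];               /* placeholder operator */
}

/* Hypothetical feature record: image location plus a work estimate
 * that grows with local feature density. */
typedef struct { int x, y, cost; } Feature;

/* Feature-based: per-feature cost is uneven, so dynamic scheduling
 * (effectively a shared work queue) keeps processing elements busy. */
static void feature_based(const Feature *f, int n, float *range) {
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < n; i++) {
        float r = 0.0f;
        for (int k = 0; k < f[i].cost; k++)    /* uneven inner work */
            r += 1.0f / (float)(k + 1);
        range[i] = r;                          /* placeholder result */
    }
}

int main(void) {
    static float img[W * H], out[W * H];
    field_based(img, out);

    /* Hypothetical features with widely varying costs. */
    Feature f[3] = { {10, 5, 100}, {11, 5, 5000}, {40, 20, 10} };
    float range[3];
    feature_based(f, 3, range);
    printf("range[1] = %f\n", range[1]);
    return 0;
}

With a static schedule, the processing element that draws the dense feature cluster (cost 5000 above) becomes the bottleneck; schedule(dynamic) hands out features one at a time, which is the load-balancing concern the abstract raises.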
BibTeX
@conference{Suorsa-1993-13454,
  author = {Raymond Suorsa and Banavar Sridhar and Terrence W. Fong},
  title = {Real-time Computational Needs of a Multisensor Feature-Based Range-Estimation Method},
  booktitle = {Proceedings of SPIE Sensor Fusion and Aerospace Applications},
  year = {1993},
  month = {September},
  volume = {1956},
  pages = {57--69},
  publisher = {SPIE},
  keywords = {parallel processing, optical flow},
}