In recent years, significant progress has been made towards achieving autonomous roadway navigation using video images. None of the systems developed, however, takes full advantage of all the information in the 512×512 pixel, 30 frame/second color image sequence. This can be attributed to the large amount of data in the color video stream (22.5 Mbytes/second) as well as to the limited computing resources available to these systems. We have increased the computing power available to the system by using a data-parallel computer. Specifically, a single instruction, multiple data (SIMD) machine was used to develop simple and efficient parallel algorithms, largely based on connectionist techniques, which can process every pixel in the incoming 30 frame/second color video stream. The system presented here uses substantially larger frames and processes them at faster rates than other color road-following systems. This is achievable through the use of algorithms specifically designed for a fine-grained parallel machine rather than algorithms ported to parallel architectures from existing systems. The algorithms presented here were tested on 4K and 16K processor MasPar MP-1 machines and on 4K, 8K, and 16K processor MasPar MP-2 machines, and were used to drive Carnegie Mellon’s testbed vehicle, the Navlab I, on paved roads near campus.
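As a consistency check, the quoted input data rate follows directly from the stated frame parameters, assuming 3 bytes per color pixel and 1 Mbyte = 2^20 bytes:

\[
512 \times 512~\text{pixels/frame} \times 3~\text{bytes/pixel} \times 30~\text{frames/second} = 23{,}592{,}960~\text{bytes/second} \approx 22.5~\text{Mbytes/second}.
\]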