Modeling Rugged Terrain by Mobile Robots with Multiple Sensors
Abstract
Modeling the environment is an essential capability for autonomous robots. To navigate and manipulate without direct human control, an autonomous robot must sense its environment, model it, and plan and execute actions based on information from the model. Perceiving and mapping rugged terrain from multiple sensor data is an important problem for autonomous navigation and manipulation on other planets, on seafloors, and at hazardous waste sites and mines. In this thesis, we develop 3-D vision techniques for incrementally building an accurate 3-D representation of rugged terrain from 3-D information acquired by multiple sensors.

This thesis develops the locus method for modeling rugged terrain. The locus method exploits sensor geometry to efficiently build a terrain representation from multiple sensor data. We apply the locus method to accurately convert a range image into an elevation map. We also use the locus method for sensor fusion, combining color and range data, and range and Digital Elevation Map (DEM) data.

Incrementally modeling the terrain from a sequence of range images requires an accurate estimate of the motion between successive images. In rugged terrain, estimating motion accurately is difficult because of occlusions and terrain irregularity. We show how to extend the locus method to pixel-based terrain matching, called the iconic matching method, to solve these problems. To achieve the required accuracy in the motion estimate, our terrain matching method combines feature matching, iconic matching, and Inertial Navigation Sensor (INS) data.

Over a long traverse, it is difficult to avoid error accumulation in a composite terrain map if only local observations are used. However, a prior DEM can reduce this error accumulation if we can estimate the vehicle position in the DEM. We apply the locus method to estimate the vehicle position by matching a sequence of range images against the DEM.
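To illustrate the range-image-to-elevation-map problem the locus method addresses, the sketch below shows a naive gridding baseline: 3-D points from a range sensor are binned into a regular grid, keeping the highest point per cell. This is deliberately not the thesis's locus method (which instead intersects the sensor's viewing geometry with the grid to avoid the gaps and aliasing that simple binning produces); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def range_points_to_elevation_map(points, cell_size=0.5, grid_shape=(100, 100)):
    """Naive baseline: bin 3-D points (x, y, z) into a regular grid,
    keeping the maximum z seen in each cell.  Cells with no data stay NaN.
    (Illustrative only; the locus method in the thesis uses the sensor
    geometry directly and avoids the holes this binning leaves.)"""
    elev = np.full(grid_shape, np.nan)
    ix = (points[:, 0] / cell_size).astype(int)   # column index from x
    iy = (points[:, 1] / cell_size).astype(int)   # row index from y
    valid = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
    for i, j, z in zip(ix[valid], iy[valid], points[valid, 2]):
        if np.isnan(elev[i, j]) or z > elev[i, j]:
            elev[i, j] = z
    return elev
```

Because a range sensor samples the terrain along viewing rays rather than on a vertical grid, many cells receive no sample at all under this scheme, which is precisely the shortcoming the locus method's geometric formulation is designed to overcome.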
Experimental results from large-scale real and synthetic terrains demonstrate the feasibility and power of these 3-D mapping techniques for rugged terrain. In real-world experiments, we built a composite terrain map by merging 125 real range images over a distance of 100 meters. Using synthetic range images, we produced a composite map covering 150 meters from 159 individual images.

Autonomous navigation requires high-level scene descriptions as well as geometric representations of natural terrain environments. We present new algorithms for extracting topographic features (peaks, pits, ravines, and ridges) from contour maps derived from elevation maps. Experimental results on a DEM support our approach for extracting topographic features.

In this work, we develop a 3-D vision system for modeling rugged terrain. With this system, mobile robots operating in rugged environments will be able to build accurate terrain models from multiple sensor data.
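As a simplified stand-in for the topographic-feature idea, the sketch below labels strict local maxima of an elevation grid as peaks and strict local minima as pits by comparing each interior cell with its eight neighbors. The thesis instead derives peaks, pits, ridges, and ravines from contour maps built on the elevation map, which is more robust than this pointwise test; the function here is a hypothetical illustration.

```python
import numpy as np

def classify_extrema(elev):
    """Return (peaks, pits) as lists of (row, col) indices: a peak is a
    cell strictly higher than all 8 neighbors, a pit strictly lower.
    (Simplified stand-in for the thesis's contour-map-based extraction.)"""
    peaks, pits = [], []
    rows, cols = elev.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nbrs = elev[i - 1:i + 2, j - 1:j + 2].flatten()
            nbrs = np.delete(nbrs, 4)          # drop the center cell itself
            if np.all(elev[i, j] > nbrs):
                peaks.append((i, j))
            elif np.all(elev[i, j] < nbrs):
                pits.append((i, j))
    return peaks, pits
```

Ridges and ravines cannot be found by such a pointwise test, since they are extended curves rather than isolated extrema; that is one motivation for working from contour maps as the thesis does.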
BibTeX
@phdthesis{Kweon-1991-13196,
  author  = {In So Kweon},
  title   = {Modeling Rugged Terrain by Mobile Robots with Multiple Sensors},
  year    = {1991},
  month   = {January},
  school  = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
}