An Extendable Framework for Visual Servoing Using Environment Models - Robotics Institute Carnegie Mellon University

An Extendable Framework for Visual Servoing Using Environment Models

Bradley Nelson and Pradeep Khosla
Conference Paper, Proceedings of (ICRA) International Conference on Robotics and Automation, Vol. 1, pp. 184 - 189, May, 1995

Abstract

Visual servoing is a manipulation control strategy that precisely positions objects using imprecisely calibrated camera-lens-manipulator systems. To quickly and easily integrate sensor-based manipulation strategies such as visual servoing into robotic systems, a system framework and a task representation must exist that facilitate this integration. The framework must also be extendable so that obsolete sensor systems can be easily replaced or extended as new technologies become available. In this paper, we present a framework for expectation-based visual servoing, which visually guides tasks based on the expected visual appearance of the task. The appearance of the task is generated by a model of the environment that uses texture-mapped geometric models to represent objects. A system structure which facilitates the integration of various configurations of visual servoing systems is presented, as well as a hardware implementation of the proposed system and experimental results using a stereo camera system.
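To make the control strategy concrete, the sketch below shows a textbook image-based visual servoing loop: image-feature error is mapped to a camera velocity command through the pseudoinverse of an interaction (image Jacobian) matrix. This is a generic illustration of the technique, not the authors' expectation-based implementation; the function names, the point-feature interaction matrix, and the gain value are all assumptions chosen for the example.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix for one normalized image point (x, y) at depth Z.

    Maps the 6-DOF camera velocity (vx, vy, vz, wx, wy, wz) to the
    image-plane velocity of the point -- the standard point-feature form.
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def servo_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command v = -gain * pinv(L) @ (s - s*).

    `features` and `desired` are lists of (x, y) normalized image points;
    `depths` gives an estimated depth Z for each point. Driving the camera
    with this velocity reduces the feature error toward zero.
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# When the observed features already match the expected (desired) ones,
# the commanded velocity is zero and the manipulator holds its pose.
v = servo_velocity([(0.1, 0.0), (0.0, 0.1)], [(0.1, 0.0), (0.0, 0.1)],
                   depths=[1.0, 1.0])
```

In an expectation-based scheme such as the one the paper describes, the desired features `s*` would come from rendering the environment model (texture-mapped geometry) rather than from a stored reference image, but the control law itself takes the same form.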

BibTeX

@conference{Nelson-1995-13883,
author = {Bradley Nelson and Pradeep Khosla},
title = {An Extendable Framework for Visual Servoing Using Environment Models},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {1995},
month = {May},
volume = {1},
pages = {184--189},
}