
PhD Thesis Defense

Rosen Diankov, Carnegie Mellon University
Thursday, August 19
11:30 am to 12:00 pm
Automated Construction of Robotic Manipulation Programs

Event Location: NSH 3002

Abstract: Society is becoming more automated, with robots beginning to perform many tasks in factories and starting to help out in home and office environments. Arguably, one of the most important functions of robots is the ability to manipulate objects in their environment to accomplish primitive tasks. Because the space of possible robot designs, sensor modalities, and target tasks is huge, researchers end up manually creating many models, databases, and programs for their specific task, an effort that is repeated whenever the task changes. This thesis introduces a manipulation framework that, given a specification for a robot and a task, can automatically construct the databases and programs required for the robot to reliably execute the task. It addresses problems in three main areas critical for manipulation.


We present a geometry-based planning framework that analyzes all necessary modalities of manipulation planning and offers efficient algorithms to solve each. This analysis identifies the information needed from the task and robot specifications. Using this set of analyses, the construction process then builds a planning knowledge-base that allows the planners to make informed geometric decisions about the structure of the scene and the robot’s goals. We show how to efficiently generate and query this information for the planners.
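
As a rough illustration of what such a planning knowledge-base might look like, the Python sketch below stores precomputed grasp sets and reachable workspace poses and lets a planner query the grasps that lie near a reachable position. The class name, data layout, and distance threshold are assumptions for illustration, not the framework described in the thesis.

```python
# Hypothetical sketch of a planning knowledge-base queried by manipulation
# planners. Names and data layout are illustrative assumptions only.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PlanningKnowledgeBase:
    # target object name -> array of candidate grasp transforms (N x 4 x 4)
    grasp_sets: dict = field(default_factory=dict)
    # discretized end-effector poses the arm can reach (M x 4 x 4)
    reachable_poses: np.ndarray = field(default_factory=lambda: np.empty((0, 4, 4)))

    def add_grasps(self, target: str, grasps: np.ndarray) -> None:
        self.grasp_sets[target] = grasps

    def query_feasible_grasps(self, target: str, tol: float = 0.05) -> np.ndarray:
        """Return grasps whose position lies within `tol` meters of a reachable pose."""
        grasps = self.grasp_sets.get(target, np.empty((0, 4, 4)))
        if len(grasps) == 0 or len(self.reachable_poses) == 0:
            return grasps
        grasp_pos = grasps[:, :3, 3]                # (N, 3) grasp positions
        reach_pos = self.reachable_poses[:, :3, 3]  # (M, 3) reachable positions
        d = np.linalg.norm(grasp_pos[:, None, :] - reach_pos[None, :, :], axis=-1)
        return grasps[d.min(axis=1) < tol]
```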


In order to reliably complete the task, we present efficient algorithms that consider the visibility of objects in the robot's cameras when choosing manipulation goals. We show real-world results with robots using cameras attached to their grippers to boost the accuracy of object detection and to reliably complete their tasks. Furthermore, we use the visibility theory to develop a completely automated extrinsic camera calibration method and present a new measure for computing the confidence of the final results.
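
To illustrate the kind of visibility reasoning described above, the following sketch filters candidate grasp goals by whether the target stays inside a simple conical field-of-view model of a wrist-mounted camera. The camera model, field-of-view parameters, and function names are illustrative assumptions, not the algorithms presented in the thesis.

```python
# Hypothetical sketch: keep only grasp goals from which a gripper-mounted
# camera can still see the target object. The conical field-of-view model
# and its parameters are illustrative assumptions.
import numpy as np

def target_in_view(T_cam_world: np.ndarray, p_target: np.ndarray,
                   fov_deg: float = 60.0, max_range: float = 1.0) -> bool:
    """True if the target point lies inside a conical field of view (camera looks along +z)."""
    R, t = T_cam_world[:3, :3], T_cam_world[:3, 3]
    p_cam = R.T @ (p_target - t)          # target expressed in the camera frame
    dist = np.linalg.norm(p_cam)
    if p_cam[2] <= 0 or dist > max_range:  # behind the camera or too far away
        return False
    angle = np.degrees(np.arccos(p_cam[2] / dist))
    return angle < fov_deg / 2.0

def filter_visible_grasps(grasps, camera_in_gripper, p_target):
    """Keep gripper poses whose attached camera keeps the target in view."""
    return [T for T in grasps if target_in_view(T @ camera_in_gripper, p_target)]
```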


For the perception side of manipulation, we present a vision-centric database that can analyze a rigid object’s surface for stable and discriminable features to use in pose extraction programs. Furthermore, we show work towards a new object pose extraction algorithm that does not rely on 2D/3D feature correspondences and thus reduces the early-commitment problem plaguing the generality of traditional vision-based pose extraction algorithms.
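
As a rough sketch of how stable and discriminable surface features might be scored for such a database, the Python below combines a re-detection rate across views with a nearest-neighbor descriptor distance. The scoring formula, weighting, and data layout are assumptions for illustration only, not the thesis implementation.

```python
# Hypothetical sketch: rank candidate features for a pose-extraction database
# by (a) stability, how often each feature is re-detected across views, and
# (b) discriminability, how far its descriptor is from its nearest neighbor.
import numpy as np

def score_features(detections: np.ndarray, descriptors: np.ndarray,
                   n_views: int) -> np.ndarray:
    """detections: (F,) count of views where each feature was found;
    descriptors: (F, D) mean descriptor per feature; returns (F,) scores."""
    stability = detections / float(n_views)
    # pairwise descriptor distances; ignore self-distance on the diagonal
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    discriminability = d.min(axis=1)
    discriminability = discriminability / (discriminability.max() + 1e-9)
    # equal weighting of the two criteria is an arbitrary illustrative choice
    return 0.5 * stability + 0.5 * discriminability
```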


Poster

Committee: Takeo Kanade, Co-chair

James Kuffner, Co-chair

Paul Rybski

Kei Okada, University of Tokyo