
PhD Thesis Proposal

Rosen Diankov, Carnegie Mellon University
Friday, October 16, 3:30 pm
Robotics Framework for Automated Construction of Autonomous Manipulation Programs

Event Location: Newell Simon Hall 3305

Abstract: Society is becoming increasingly automated, with robots beginning to perform many tasks in factories and starting to help out in home and office environments. Arguably, one of the most important capabilities of a robot is the ability to manipulate its environment to accomplish basic tasks. However, the space of possible robot designs, sensor modalities, and target tasks is so large that many models, databases, and programs must be manually re-created whenever the designs or requirements change. This thesis introduces a framework that, given a robot and a task specification, can automatically construct the databases and programs required for the robot to robustly execute the task. The construction process relies on minimal human intervention beyond real-world sensor data for feature and noise model computation. Furthermore, the execution part of the framework exploits both camera visibility capabilities and a multitude of motion planners to accomplish the target manipulation tasks while recovering from execution errors. Using this framework and the necessary specifications, we show how any robot following a loose set of design constraints can be automatically set up to recognize and autonomously manipulate a wide variety of rigid objects.


The thesis begins with a set of guidelines for autonomous manipulation and defines the task and robot domain in which the problem is solved. We first propose a robot execution framework that meets autonomy requirements in three categories: environment complexity, sensor uncertainty and noise handling, and error recovery performance. Using these guidelines, we identify the models that must be automatically trained from the task and robot specifications. On the manipulation planning side, we analyze object grasping, several classes of randomized planners, the maintenance of task constraints, the computation of goal configurations, and models for increasing planner performance by geometrically analyzing manipulator reachability spaces and swept volumes. On the perception side, we cover the design and implementation of a vision compiler that automatically generates an object-specific pose recognition program for a broad range of rigid objects. The generated run-time vision algorithm does not rely on 2D/3D feature correspondences, thus reducing the early-commitment problem that plagues the performance of traditional vision-based pose extraction algorithms. To combine the planning and vision frameworks, we introduce a real-time visual-feedback algorithm that considers camera capabilities and vision noise models when determining robot movement.
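Since the abstract mentions several classes of randomized planners, a minimal sketch of one such planner may help ground the idea: a basic RRT over a toy 2D configuration space. This is purely illustrative, not the planner developed in the thesis (which operates over high-dimensional manipulator joint spaces under task constraints); the obstacle and the names collision_free, steer, and rrt_plan are hypothetical.

import math
import random

# Hypothetical 2D configuration space: the unit square with a single
# circular obstacle, standing in for a robot's joint space.
OBSTACLE_CENTER, OBSTACLE_RADIUS = (0.5, 0.5), 0.2

def collision_free(q):
    # A configuration is valid if it lies outside the obstacle disk.
    return math.dist(q, OBSTACLE_CENTER) > OBSTACLE_RADIUS

def steer(q_from, q_to, step=0.05):
    # Move from q_from toward q_to by at most `step`.
    d = math.dist(q_from, q_to)
    if d <= step:
        return q_to
    t = step / d
    return (q_from[0] + t * (q_to[0] - q_from[0]),
            q_from[1] + t * (q_to[1] - q_from[1]))

def rrt_plan(start, goal, iterations=5000, goal_bias=0.05, goal_tol=0.05):
    # Grow a tree from `start`, occasionally biasing samples toward `goal`.
    parents = {start: None}  # tree stored as a child -> parent map
    for _ in range(iterations):
        q_rand = goal if random.random() < goal_bias else \
                 (random.random(), random.random())
        q_near = min(parents, key=lambda q: math.dist(q, q_rand))
        q_new = steer(q_near, q_rand)
        if q_new in parents or not collision_free(q_new):
            continue
        parents[q_new] = q_near
        if math.dist(q_new, goal) < goal_tol:
            # Reconstruct the path by following parent pointers to the root.
            path, q = [], q_new
            while q is not None:
                path.append(q)
                q = parents[q]
            return list(reversed(path))
    return None  # no path found within the iteration budget

if __name__ == "__main__":
    path = rrt_plan(start=(0.1, 0.1), goal=(0.9, 0.9))
    print("waypoints:", len(path) if path else "no path")

The goal bias and step size trade exploration against convergence speed; practical variants such as bidirectional RRTs grow trees from both the start and goal configurations.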


Real-world tasks on several robot platforms are presented, demonstrating the capability of the proposed framework to automatically develop programs for autonomous manipulation.

Committee: Takeo Kanade, Co-chair

James Kuffner, Co-chair

Paul Rybski

Kei Okada, University of Tokyo