3:30 pm to 12:00 am
Event Location: NSH 1507
Bio: My ultimate goal is to solve the robotics problem: combine vision, perception, planning, and control into one coherent framework to create intelligent and autonomous robots. Since I joined The Robotics Institute, I’ve been working on such an architecture, called OpenRAVE. My current research focuses on statistical planning using machine learning to get robots to autonomously perform complex tasks like making coffee or clearing the dinner table without requiring programmers to tweak endless parameters or write scripts. I’m very interested in getting humanoid/mobile-manipulator robots into the kitchen to help out with chores. I also work on robot vision and am looking for effective ways to plan within environments using cameras attached to the robot.
Abstract: This presentation focuses on an object-specific vision system that
detects and extracts the precise 6D pose of objects in an image. The
system builds a data-driven statistical model of the expected features
of an object’s surface and combines this with a discrete search method
to extract the poses of all object instances. The training phase of the vision
system can be interpreted as a compiler that automatically analyzes the
statistics of how the features are distributed on the object and
determines a feature set’s stability and discriminative power. This
compilation phase requires the precise CAD model of an object along with
a training set of real-world images. Compilation produces a
CAD-independent model of how features relate to the object’s pose
and to one another. These
relationships allow both point-based features like SIFT and edge-based
features to be used simultaneously when computing the 6D pose of an
object. Using this data-driven model, we employ a discrete randomized
search with RANSAC to find the poses of all instances of the object in a
novel image.
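
For readers who want a concrete picture of the final step, below is a minimal sketch of RANSAC-based 6D pose estimation from point features using OpenCV. It is not the speaker's system (which also folds in edge-based features and the compiled statistical feature model); it only illustrates the standard pattern the abstract builds on. The function name, arguments, and thresholds are hypothetical.

```python
# Hypothetical sketch: 6D pose of a known object from SIFT matches + RANSAC.
# The talk's actual pipeline also uses edge features and a learned
# statistical model of feature distributions; this shows only the
# point-feature / RANSAC core.
import numpy as np
import cv2

def estimate_pose(model_points_3d, model_descriptors, image, K, dist=None):
    """Return (rvec, tvec, inliers) or None if no pose is found.

    model_points_3d   -- (N, 3) 3D feature locations on the object (from CAD)
    model_descriptors -- (N, 128) SIFT descriptors gathered at training time
    K                 -- (3, 3) camera intrinsic matrix
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None:
        return None

    # Match the object's trained descriptors against image features.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(model_descriptors, descriptors, k=2)

    # Lowe's ratio test keeps only distinctive correspondences.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 6:
        return None

    obj_pts = np.float32([model_points_3d[m.queryIdx] for m in good])
    img_pts = np.float32([keypoints[m.trainIdx].pt for m in good])

    # Randomized search over correspondence subsets (RANSAC) for the
    # rotation/translation that best explains the 2D observations.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, K, dist,
        reprojectionError=4.0, iterationsCount=200)
    if not ok:
        return None
    return rvec, tvec, inliers
```

To recover all instances of the object, as the abstract describes, a search like this could be repeated, removing each found pose's inlier matches before the next round, until no pose with sufficient support remains.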