My research lies at the intersection of robotics, machine learning, and computer vision.
I am interested in developing methods for robotic perception and control that allow robots to operate in the messy, cluttered environments of our daily lives. My approach is to design new deep learning algorithms that understand environmental change: how dynamic objects in the environment can move, and how to affect the environment to achieve a desired task.
I have applied this idea of learning to understand environmental changes to improve a robot’s capabilities in two domains: object manipulation and autonomous driving. I am currently working on learning to control indoor robots for various object manipulation tasks, addressing questions of multi-task learning, robust learning, simulation-to-real-world transfer, and safety. Within autonomous driving, I have shown how modeling changes in object appearance can improve every stage of the robot perception pipeline: segmentation, tracking, velocity estimation, and object recognition. By teaching robots to understand and affect environmental changes, I hope to open the door to many new robotics applications, such as robots for our homes, assisted living facilities, schools, hospitals, or disaster relief areas.
Research Topics
- Robot Programming by Demonstration
- Active Perception
- Robot Learning for Manipulation
- Perceptual Robotics
- Computer Vision
- Visual Servoing and Visual Tracking
- Learning and Classification
- 3-D Vision and Recognition
- Human-Centered Robotics
- Neurorobotics: From Vision to Action
- Robotics Foundations
- Sensing & Perception
- Manipulation & Interfaces
- Human-Robot Collaboration
- Reinforcement Learning
- Deep Learning