Abstract:
Interactions with the physical world are at the core of robotics. However, robotics research, especially in manipulation, has mainly focused on tasks with limited physical interaction, such as pick-and-place or pushing objects on a tabletop. These interactions are often quasi-static, involve a predefined or limited sequence of contact events, and produce limited object motion. In contrast, humans interact with their surroundings using dynamic and contact-rich motor skills, which enable us to perform a wider variety of tasks in a greater range of settings.
One approach to equipping robots with a wider range of skills is Reinforcement Learning (RL). Although RL has shown tremendous success in games and simulated domains, advancing robot motor skills with RL remains challenging. In this thesis, we discuss three challenges in applying RL to learn complex motor skills and our approaches to addressing them. First, motor skill data in the real world is scarce because data collection on real robots is time-consuming and expensive. To reuse real robot data effectively, we propose an offline RL algorithm that trains a policy on a static dataset without additional data collection. Second, a robot's motor skills are often assumed to be limited by its hardware design. We propose to extend a robot's capabilities beyond its hardware by exploiting the external environment, which gives rise to dynamic and contact-rich emergent behaviors. Third, learning complex motor skills for long-horizon tasks is difficult. To address this challenge, we propose an RL framework with an improved action representation that significantly simplifies a complex task and learns dynamic, contact-rich interactions that generalize across objects. In the proposed work, we plan to extend the completed work on action representation to demonstrate its benefits across a wider variety of tasks with fewer assumptions.
Thesis Committee Members:
David Held, Chair
Abhinav Gupta
Oliver Kroemer
Vincent Vanhoucke, Google