Deep Reinforcement Learning with skill library: Learning and exploration with temporal abstractions using coarse approximate dynamics models - Robotics Institute Carnegie Mellon University

MSR Thesis Defense

Arpit Agarwal, PhD Student, Robotics Institute,
Carnegie Mellon University
Monday, June 25
10:00 am to 11:00 am
NSH A507

Reinforcement learning is a computational approach to learning from interaction. However, learning from scratch with reinforcement learning requires an exorbitant number of interactions with the environment, even for simple tasks. One way to alleviate this problem is to reuse previously learned skills, as humans do. This thesis provides frameworks and algorithms to build and reuse a Skill Library. First, we extend the Parameterized Action Space formulation with our Skill Library to the multi-goal setting and show improved learning using hindsight at the coarse level. Second, we use our Skill Library to explore at a coarser level in order to learn optimal policies for continuous control. We demonstrate the benefits, in terms of speed and accuracy, of the proposed approaches on a set of complex real-world robotic manipulation tasks on which some state-of-the-art methods completely fail.
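The core idea in the abstract, a high-level policy that selects a discrete skill from a library together with a continuous parameter for it (a parameterized action space), with hindsight relabeling applied at the coarse skill level, can be sketched as a toy example. All names here (`SkillLibrary`, `run_episode`, the "move"/"stay" skills) are illustrative stand-ins, not the thesis's actual implementation.

```python
# Toy sketch of a parameterized action space backed by a skill library.
# A greedy high-level policy stands in for a learned policy.

class SkillLibrary:
    """Maps skill names to coarse parameterized controllers (1-D toy)."""
    def __init__(self):
        self.skills = {
            "move": lambda state, delta: state + delta,  # shift state by delta
            "stay": lambda state, _: state,              # hold position
        }

    def execute(self, name, state, param):
        return self.skills[name](state, param)

def run_episode(library, goal, steps=5):
    """Pick, at each step, the (skill, parameter) pair whose outcome
    lands closest to the goal; record the coarse trajectory."""
    state = 0.0
    trajectory = []
    for _ in range(steps):
        candidates = [("move", goal - state), ("stay", 0.0)]
        name, param = min(
            candidates,
            key=lambda a: abs(goal - library.execute(a[0], state, a[1])),
        )
        state = library.execute(name, state, param)
        trajectory.append((name, param, state))
    return state, trajectory

lib = SkillLibrary()
final, traj = run_episode(lib, goal=3.0)

# Coarse hindsight relabeling (in the spirit of hindsight experience replay):
# treat the state actually achieved as the goal, so even a failed rollout
# yields a useful training example at the skill level.
relabeled_goal = final
```

A learned version would replace the greedy `min` with a policy trained over the skill index and its continuous parameter jointly; the relabeled goal lets that policy learn from rollouts that missed the original goal.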


Committee Members:

Katerina Fragkiadaki (Co-chair)

Katharina Muelling (Co-chair)

Oliver Kroemer

Devin Schwab