Towards a Good Representation For Reinforcement Learning - Robotics Institute Carnegie Mellon University

PhD Speaking Qualifier

Xingyu Lin
Robotics Institute, Carnegie Mellon University
Friday, February 14
1:00 pm to 2:00 pm
WEH 5421

Abstract:
Deep reinforcement learning has achieved many successes in recent years. However, its high sample complexity and the difficulty of specifying a reward function have limited its application. In this talk, I will take a representation-learning perspective on these issues. Is it possible to map a raw, potentially high-dimensional observation to a low-dimensional representation from which learning is more efficient? Is it beneficial to define a reward function based on that representation?

The talk will be in three parts. First, I will talk about how to combine a variety of self-supervised auxiliary tasks to learn a better representation for the control task at hand. Second, I will talk about how to use an indicator reward function, a simple but strong baseline for learning goal-conditioned policies without explicit reward specification. Finally, I will briefly introduce SoftGym, our recently proposed benchmark for deformable object manipulation, highlighting the challenges of learning from high-dimensional observations.
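For readers unfamiliar with the second part, an indicator reward for goal-conditioned policies can be sketched as below. This is a minimal illustration, not the speaker's implementation: the Euclidean distance metric, the threshold `epsilon`, and the assumption that states and goals live in the same (possibly learned) representation space are all choices made here for concreteness.

```python
import numpy as np

def indicator_reward(state_repr, goal_repr, epsilon=0.05):
    """Sparse indicator reward for goal-conditioned RL.

    Returns 1.0 when the state's representation lies within
    `epsilon` of the goal's representation (Euclidean distance),
    and 0.0 otherwise. No shaped reward needs to be specified.
    """
    distance = np.linalg.norm(np.asarray(state_repr) - np.asarray(goal_repr))
    return float(distance < epsilon)
```

The appeal of this form is that it requires no task-specific reward engineering: once a representation is available, reaching the goal is defined purely by proximity in that space, at the cost of a sparse learning signal.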

Committee:
David Held
Oliver Kroemer
Abhinav Gupta
Adithya Murali