PhD Speaking Qualifier
February
![Xingyu Lin](https://www.ri.cmu.edu/app/uploads/2018/01/Xingyu.Lin_.jpeg)
Carnegie Mellon University
1:00 pm to 2:00 pm
WEH 5421
Abstract:
Deep reinforcement learning has achieved many successes in recent years. However, its high sample complexity and the difficulty of specifying a reward function have limited its applications. In this talk, I will take a representation learning perspective on these issues. Is it possible to map a raw, potentially high-dimensional observation to a low-dimensional representation from which learning is more efficient? Is it beneficial to define a reward function based on this representation?
The talk will be in three parts. First, I will discuss how to combine a variety of self-supervised auxiliary tasks to learn a better representation for the control task at hand. Second, I will show how to use an indicator reward function, a simple but strong baseline for learning goal-conditioned policies without explicit reward specification. Finally, I will briefly introduce SoftGym, our recently proposed benchmark for deformable object manipulation, highlighting the challenges of learning from high-dimensional observations.
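The indicator reward mentioned above can be sketched minimally as follows: reward 1 when the achieved state lies within a small tolerance of the goal, and 0 otherwise. This is an illustrative sketch only, not the speaker's implementation; the function name, arguments, and the distance threshold are all assumptions.

```python
import numpy as np

def indicator_reward(achieved_state, goal, threshold=0.05):
    """Sparse indicator reward for goal-conditioned RL.

    Returns 1.0 if the achieved state is within `threshold`
    (Euclidean distance) of the goal, else 0.0.
    Names and the threshold value are illustrative assumptions.
    """
    distance = np.linalg.norm(np.asarray(achieved_state) - np.asarray(goal))
    return float(distance <= threshold)
```

Such a reward requires no hand-designed shaping: the agent is rewarded only on reaching (a neighborhood of) the goal, which is what makes it a simple yet strong baseline for goal-conditioned policies.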
Committee:
David Held
Oliver Kroemer
Abhinav Gupta
Adithya Murali