3:30 pm to 4:30 pm
Newell-Simon Hall 1305
In this talk, I will discuss the role of learning representations for robots that interact with humans and robots that interactively learn from humans, through a few different vignettes. I will first discuss how the bounded rationality of humans guided us toward developing learned latent action spaces for shared autonomy. It turns out this "bounded rationality" is not a bug but a feature — i.e., we can develop extremely efficient coordination algorithms by learning latent representations of partner strategies and operating in this low-dimensional space. I will then discuss how we can actively learn such representations that capture human preferences, including our recent work on how large language models can help design human preference reward functions. Finally, I will end the talk with a discussion of the types of representations useful for learning a robotics foundation model, along with some preliminary results on a new model that leverages language supervision to shape representations.
Bio: I am an Assistant Professor in the Computer Science Department at Stanford University. My research interests lie at the intersection of robotics, machine learning, and control theory. Specifically, my group is interested in developing efficient algorithms for safe, reliable, and adaptive human-robot and, more generally, multi-agent interactions. I received my doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and my bachelor's degree in EECS from UC Berkeley in 2012.