Student Talks
Exploration for Continually Improving Robots
Abstract: General-purpose robots should be able to perform arbitrary manipulation tasks and get better at new ones as they accumulate experience. The current paradigm in robot learning involves imitation or simulation. Scaling these approaches to learn from more data across various tasks is bottlenecked by the human labor required either in collecting demonstrations [...]
Sparse-view 3D in the Wild
Abstract: Reconstructing 3D scenes and objects from images alone has been a long-standing goal in computer vision. Recent years have seen tremendous progress, with methods now capable of producing near photo-realistic renderings from any viewpoint. However, existing approaches generally rely on a large number of input images (typically 50-100) to compute camera poses and ensure view [...]
Deep 3D Geometric Reasoning for Robot Manipulation
Abstract: To solve general manipulation tasks in real-world environments, robots must be able to perceive and condition their manipulation policies on the 3D world. These agents will need to understand various common-sense spatial/geometric concepts about manipulation tasks: that local geometry can suggest potential manipulation strategies, that policies should be invariant to the choice of reference frame, [...]
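As a concrete illustration of the reference-frame idea (my own sketch, not material from the talk), the check below verifies that a point-cloud policy's predicted 3D point transforms along with the scene when it is re-expressed in a different frame; the decision is invariant to the reference frame exactly when the prediction is SE(3)-equivariant in this sense. The `policy` here is a hypothetical stand-in that returns the cloud centroid, which happens to satisfy the property.

```python
import numpy as np

def random_se3():
    """Sample a random rigid transform (rotation + translation) as a 4x4 matrix."""
    q, _ = np.linalg.qr(np.random.randn(3, 3))  # orthonormal columns
    q *= np.sign(np.linalg.det(q))              # force det = +1 (proper rotation)
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = np.random.randn(3)
    return T

def apply_se3(T, points):
    """Apply a 4x4 rigid transform to an (N, 3) point cloud."""
    return points @ T[:3, :3].T + T[:3, 3]

def policy(points):
    """Hypothetical placeholder policy: predicts a target 3D point from geometry.
    The centroid is SE(3)-equivariant, so it passes the check below."""
    return points.mean(axis=0)

# Predicting in frame A and then mapping the result to frame B should match
# predicting directly on the frame-B point cloud.
cloud = np.random.randn(100, 3)
T = random_se3()
pred_then_transform = apply_se3(T, policy(cloud)[None])[0]
transform_then_pred = policy(apply_se3(T, cloud))
assert np.allclose(pred_then_transform, transform_then_pred, atol=1e-6)
```

In practice such equivariance is either built into the network architecture or encouraged by augmenting training data with random rigid transforms.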
Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi-Sensor Kalman Filter
This talk has been postponed […]
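Although the talk itself is postponed, the core idea named in the title is standard enough to sketch: write the Kalman filter's predict/update steps as differentiable tensor operations, so a tracking loss can train fusion parameters (for example, noise covariances) end-to-end. The minimal PyTorch example below is an assumption about the general technique, not the speaker's implementation; the constant-velocity model and the two position-only sensors are illustrative choices.

```python
import torch

class DifferentiableKF(torch.nn.Module):
    """Minimal differentiable Kalman filter: a constant-velocity state
    [x, y, vx, vy] observed by two position sensors fused sequentially.
    Noise covariances are learnable parameters."""

    def __init__(self, dt=0.1, n_sensors=2):
        super().__init__()
        F = torch.eye(4)
        F[0, 2] = F[1, 3] = dt                      # constant-velocity transition
        self.register_buffer("F", F)
        self.register_buffer("H", torch.eye(2, 4))  # each sensor observes (x, y)
        # Log-parameterization keeps the diagonal covariances positive.
        self.log_q = torch.nn.Parameter(torch.zeros(4))             # process noise
        self.log_r = torch.nn.Parameter(torch.zeros(n_sensors, 2))  # per-sensor noise

    def step(self, x, P, measurements):
        # Predict through the motion model.
        x = self.F @ x
        P = self.F @ P @ self.F.T + torch.diag(self.log_q.exp())
        # Update sequentially with each sensor's (x, y) measurement.
        for i, z in enumerate(measurements):
            R = torch.diag(self.log_r[i].exp())
            S = self.H @ P @ self.H.T + R            # innovation covariance
            K = P @ self.H.T @ torch.linalg.inv(S)   # Kalman gain
            x = x + K @ (z - self.H @ x)
            P = (torch.eye(4) - K @ self.H) @ P
        return x, P

# Toy usage: gradients of a tracking loss reach the noise parameters.
kf = DifferentiableKF()
x, P = torch.zeros(4), torch.eye(4)
x, P = kf.step(x, P, [torch.tensor([1.0, 0.5]), torch.tensor([1.1, 0.4])])
loss = ((x[:2] - torch.tensor([1.05, 0.45])) ** 2).sum()
loss.backward()  # populates kf.log_q.grad and kf.log_r.grad
```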
Towards diverse zero-shot manipulation via actualizing visual plans
Abstract: In this thesis, we seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation: interacting with unseen objects in novel scenes without test-time adaptation. Robots that can be reliably deployed out-of-the-box in new scenarios have the potential to help humans with everyday tasks. Not requiring any test-time training through demonstrations or [...]
Deep Learning for Sensors: Development to Deployment
Abstract: Robots rely heavily on sensing to reason about physical interactions, and recent advancements in rapid prototyping, MEMS sensing, and machine learning have led to a plethora of sensing alternatives. However, few of these sensors have gained widespread use among roboticists. This thesis proposes a framework for incorporating sensors into a robot learning paradigm, from [...]
Offline Learning for Stochastic Multi-Agent Planning in Autonomous Driving
Abstract: Fully autonomous vehicles have the potential to greatly reduce vehicular accidents and revolutionize how people travel and how we transport goods. Many of the major challenges for autonomous driving systems emerge from the numerous traffic situations that require complex interactions with other agents. For the foreseeable future, autonomous vehicles will have to share the [...]
Transfer Learning via Temporal Contrastive Learning
Abstract: This thesis introduces a novel transfer learning framework for deep reinforcement learning that combines goal-conditioned policies with temporal contrastive learning to automatically discover meaningful sub-goals. The approach involves pre-training a goal-conditioned agent, finetuning it on the target domain, and using contrastive learning to construct a planning graph that guides the agent via sub-goals. Experiments [...]
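To make the temporal contrastive learning ingredient concrete, here is a minimal sketch of one common formulation: an InfoNCE-style loss in which states occurring within a few steps of each other in a trajectory are treated as positives and all other states in the trajectory as negatives. This is an illustrative assumption about the objective, not necessarily the thesis's exact method; `encoder`, `window`, and `temperature` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(encoder, states, window=3, temperature=0.1):
    """InfoNCE-style loss over one trajectory: for each anchor state, a state
    1..window steps ahead is the positive; all other states are negatives.

    states: (T, obs_dim) tensor of consecutive observations."""
    z = F.normalize(encoder(states), dim=-1)  # (T, d) unit-norm embeddings
    logits = z @ z.T / temperature            # pairwise cosine similarities
    T_len = states.shape[0]
    # A state must never count as its own positive.
    logits = logits.masked_fill(torch.eye(T_len, dtype=torch.bool), float("-inf"))
    anchors = torch.arange(T_len - window)    # leave room for a forward positive
    targets = anchors + torch.randint(1, window + 1, (T_len - window,))
    return F.cross_entropy(logits[anchors], targets)

# Toy usage: a linear encoder over 64-dim observations from a 100-step trajectory.
encoder = torch.nn.Linear(64, 32)
loss = temporal_contrastive_loss(encoder, torch.randn(100, 64))
loss.backward()  # gradients flow into the encoder
```

Embeddings trained this way place temporally reachable states close together, which is what makes them a natural substrate for building a planning graph over sub-goals.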
Towards Influence-Aware Safe Human-Robot Interaction
Abstract: In recent years, recommender systems on social media have shown how influential (and potentially harmful) algorithms can be in our lives, at times fostering polarization and conspiracy theories that lead to unsafe behavior. Now that robots are also growing more common in the real world, we must be very careful to ensure that they [...]
Learning to Manipulate beyond Imitation
Abstract: Imitation learning has been a prevalent approach for teaching robots manipulation skills, but it still struggles with scalability and generalization. In this talk, I'll argue for going beyond elementary behavioral imitation of human demonstrations. Instead, I'll present two key directions: 1) Creating Manipulation Controllers from Pre-Trained Representations, and 2) Representing Video Demonstrations with Parameterized Symbolic [...]