Student Talks
Improving Robotic Exploration with Self-Supervision and Diverse Data
Abstract: Reinforcement learning (RL) holds great promise for improving robotics, as it allows systems to move beyond passive learning and interact with the world while learning from these interactions. A key aspect of this interaction is exploration: which actions should an RL agent take to best learn about the world? Prior work on exploration is typically [...]
An Extension to Model Predictive Path Integral Control and Modeling Considerations for Off-road Autonomous Driving in Complex Environments
Abstract: The ability to traverse complex environments and terrains is critical to autonomously driving off-road in a fast and safe manner. Challenges such as terrain navigation and vehicle rollover prevention become imperative due to the off-road vehicle configuration and the operating environment itself. This talk will introduce some of these challenges and the different tools [...]
Carnegie Mellon University
Heuristic Search Based Planning by Minimizing Anticipated Search Efforts
Abstract: We focus on relatively low-dimensional robot motion planning problems, such as planning for navigation of a self-driving vehicle, unmanned aerial vehicles (UAVs), and footstep planning for humanoids. In these problems, planning must be fast, potentially at the cost of solution quality. Often, we want to plan fast but are also interested in [...]
Combining Offline Reinforcement Learning with Stochastic Multi-Agent Planning for Autonomous Driving
Abstract: Fully autonomous vehicles have the potential to greatly reduce vehicular accidents and revolutionize how people travel and how we transport goods. Many of the major challenges for autonomous driving systems emerge from the numerous traffic situations that require complex interactions with other agents. For the foreseeable future, autonomous vehicles will have to share the [...]
Human-to-Robot Imitation in the Wild
Abstract: In this talk, I approach the problem of learning by watching humans in the wild. While traditional approaches in Imitation and Reinforcement Learning are promising for learning in the real world, they are either sample inefficient or are constrained to lab settings. Meanwhile, there has been a lot of success in processing passive, unstructured human [...]
Causal Robot Learning for Manipulation
Abstract: Two decades into the third age of AI, the rise of deep learning has yielded two seemingly disparate realities. In one, massive accomplishments have been achieved in deep reinforcement learning, protein folding, and large language models. Yet, in the other, the promises of deep learning to empower robots that operate robustly in real-world environments [...]
Dense Reconstruction of Dynamic Structures from Monocular RGB Videos
Abstract: We study the problem of 3D reconstruction of generic and deformable objects and scenes from casually-taken RGB videos, to create a system for capturing the dynamic 3D world. Being able to reconstruct dynamic structures from casual videos allows one to create avatars and motion references for arbitrary objects without specialized devices, [...]
Differentiable Collision Detection
Abstract: Collision detection between objects is critical for simulation, control, and learning for robotic systems. However, existing collision detection routines are inherently non-differentiable, limiting their applications in gradient-based optimization tools. In this talk, I present DCOL: a fast and fully differentiable collision-detection framework that reasons about collisions between a set of composable and highly expressive [...]
On Interaction, Imitation, and Causation
Abstract: A standard critique of machine learning models (especially neural networks) is that they pick up on spurious correlations rather than causal relationships and are therefore brittle in the face of distribution shift. Solving this problem in full generality is impossible (i.e. there might be no good way to distinguish between the two). However, if [...]
Learning via Visual-Tactile Interaction
Abstract: Humans learn by interacting with their surroundings using all of their senses. The first of these senses to develop is touch, and it is the first way that young humans explore their environment, learn about objects, and tune their cost functions (via pain or treats). Yet, robots are often denied this highly informative and [...]