Audio-Visual Learning for Social Telepresence
Abstract: Relationships between people are strongly influenced by distance. Even with today’s technology, remote communication is limited to a two-dimensional audio-visual experience and lacks a shared, three-dimensional space in which people can interact with each other over distance. Our mission at Reality Labs Research (RLR) in Pittsburgh is to develop such [...]
An autonomous navigation system that could hopefully support RI research
I will show a few videos highlighting the key results of our research from the last several years. These results span state estimation, mapping, autonomous navigation, and exploration. While they illustrate separate pieces of work, the underlying modules ultimately contribute to a single, integrated autonomy system. I will show a simulation [...]
Combining Offline Reinforcement Learning with Stochastic Multi-Agent Planning for Autonomous Driving
Abstract: Fully autonomous vehicles have the potential to greatly reduce vehicular accidents and revolutionize how people travel and how goods are transported. Many of the major challenges for autonomous driving systems emerge from the numerous traffic situations that require complex interactions with other agents. For the foreseeable future, autonomous vehicles will have to share the [...]
Argo Poster Session
Join us for an opportunity to see what Center students have been working on. Check out an Argo AI self-driving car in person, and grab some free appetizers, soft drinks, and Argo AI swag! All are welcome to attend.
Representations in Robot Manipulation: Learning to Manipulate Ropes, Fabrics, Bags, and Liquids
Abstract: The robotics community has seen significant progress in applying machine learning for robot manipulation. However, much manipulation research focuses on rigid objects instead of highly deformable objects such as ropes, fabrics, bags, and liquids, which pose challenges due to their complex configuration spaces, dynamics, and self-occlusions. To achieve greater progress in robot manipulation of [...]
Human-to-Robot Imitation in the Wild
Abstract: In this talk, I address the problem of learning by watching humans in the wild. While traditional approaches in Imitation and Reinforcement Learning are promising for learning in the real world, they are either sample-inefficient or constrained to lab settings. Meanwhile, there has been a lot of success in processing passive, unstructured human [...]
Safe and Stable Learning for Agile Robots without Reinforcement Learning
Abstract: My research group (https://aerospacerobotics.caltech.edu/) is working to systematically leverage AI and Machine Learning techniques towards achieving safe and stable autonomy of safety-critical robotic systems, such as robot swarms and autonomous flying cars. Another example is LEONARDO, the world's first bipedal robot that can walk, fly, slackline, and skateboard. Stability and safety are often research problems [...]
Towards editable indoor lighting estimation
Abstract: Combining virtual and real visual elements into a single, realistic image requires accurately estimating the lighting conditions of the real scene. In recent years, several approaches of increasing complexity, ranging from simple encoder-decoder architectures to more sophisticated volumetric neural rendering, have been proposed. While the quality of automatic estimates has increased, they have the unfortunate downside [...]
Causal Robot Learning for Manipulation
Abstract: Two decades into the third age of AI, the rise of deep learning has yielded two seemingly disparate realities. In one, massive accomplishments have been achieved in deep reinforcement learning, protein folding, and large language models. Yet, in the other, the promises of deep learning to empower robots that operate robustly in real-world environments [...]