Language: You’ve probably heard of it, read it, written it, gestured it, mimed it… Why can’t robots?
Abstract: Language is how meaning is conveyed between humans, and it is now the basis of foundation models. By implication, it is the most important modality for AGI and stands to replace the entire robotics control stack as the thing most worth working on.
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Foundation Models for Robotic Manipulation: Opportunities and Challenges
Abstract: Foundation models, such as GPT-4 Vision, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and [...]
Learning with Less
Abstract: The performance of an AI system is nearly always tied to the amount of data at your disposal. Self-supervised machine learning can help – mitigating tedious human supervision – but the appetite for massive training datasets in modern AI seems insatiable. Sometimes it is not the amount of data, but the mismatch of [...]
Human Perception of Robot Failure and Explanation During a Pick-and-Place Task
Abstract: In recent years, researchers have extensively used non-verbal gestures, such as head and arm movements, to express the robot's intentions and capabilities to humans. Inspired by past research, we investigated how different explanation modalities can aid human understanding and perception of how robots communicate failures and provide explanations during block pick-and-place tasks. Through an in-person [...]
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Why We Should Build Robot Apprentices And Why We Shouldn’t Do It Alone
Abstract: For robots to be able to truly integrate human-populated, dynamic, and unpredictable environments, they will have to have strong adaptive capabilities. In this talk, I argue that these adaptive capabilities should leverage interaction with end users, who know how (they want) a robot to act in that environment. I will present an overview of [...]
Learning Distributional Models for Relative Placement
Abstract: Relative placement tasks are an important category of tasks in which one object needs to be placed in a desired pose relative to another object. Previous work has shown success in learning relative placement tasks from just a small number of demonstrations, when using relational reasoning networks with geometric inductive biases. However, such methods fail [...]
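As a rough illustration of the setup (not the speaker's method; every name below is hypothetical), a relative-placement predictor can be reduced to two pieces: a learned network that maps points on one object to goal locations expressed relative to the other object, and a closed-form rigid fit that turns those correspondences into a placement pose. A distributional model would instead return several weighted pose hypotheses.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Kabsch solve: rigid (R, t) minimizing ||R @ src + t - dst||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical pipeline: a learned network predicts, for each point on
# object A, a "goal" location expressed relative to object B; a single
# placement pose is then recovered by the closed-form solve above.
# A distributional model would return K weighted (R, t) hypotheses.
rng = np.random.default_rng(0)
pts_a = rng.normal(size=(128, 3))          # stand-in for object A's point cloud
predicted_goals = pts_a @ np.eye(3).T + 1  # stand-in for network output
R, t = fit_rigid_transform(pts_a, predicted_goals)
```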
Robust Body Exposure (RoBE): A Graph-based Dynamics Modeling Approach to Manipulating Blankets over People
Abstract: Robotic caregivers could potentially improve the quality of life of many who require physical assistance. However, in order to assist individuals who are lying in bed, robots must be capable of dealing with a significant obstacle: the blanket or sheet that will almost always cover the person's body. We propose a method for targeted [...]
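A hedged sketch of the graph-based dynamics idea in the title: treat blanket points as graph nodes, connect nearby pairs, and let a learned message-passing step predict per-particle motion. The weights below are random stand-ins; in the actual RoBE model they are trained, and the architectural details are in the paper.

```python
import numpy as np

def radius_edges(x, r):
    """Connect particle pairs closer than radius r (cloth mesh proxy)."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    i, j = np.nonzero((d < r) & (d > 0))
    return i, j

def gnn_step(x, v, W_msg, W_upd, r=0.05):
    """One message-passing step predicting per-particle acceleration.

    x, v: (N, 3) positions and velocities; W_msg, W_upd: learned weights
    (random here -- in a trained model they come from gradient descent).
    """
    i, j = radius_edges(x, r)
    edge_feat = np.concatenate([x[i] - x[j], v[i] - v[j]], axis=-1)  # (E, 6)
    msg = np.tanh(edge_feat @ W_msg)                                 # (E, H)
    agg = np.zeros((len(x), msg.shape[1]))
    np.add.at(agg, i, msg)                  # sum incoming messages per node
    accel = agg @ W_upd                     # (N, 3) predicted accelerations
    v_next = v + accel
    return x + v_next, v_next

rng = np.random.default_rng(0)
x = rng.uniform(size=(64, 3)) * 0.2        # toy "blanket" particle cloud
v = np.zeros_like(x)
x, v = gnn_step(x, v, rng.normal(size=(6, 32)) * 0.1, rng.normal(size=(32, 3)) * 0.1)
```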
Exploration for Continually Improving Robots
Abstract: General-purpose robots should be able to perform arbitrary manipulation tasks and get better at performing new ones as they obtain more experience. The current paradigm in robot learning involves imitation or simulation. Scaling these approaches to learn from more data across various tasks is bottlenecked by the human labor required either in collecting demonstrations [...]
Sparse-view 3D in the Wild
Abstract: Reconstructing 3D scenes and objects from images alone has been a long-standing goal in computer vision. We have seen tremendous progress in recent years, capable of producing near photo-realistic renderings from any viewpoint. However, existing approaches generally rely on a large number of input images (typically 50-100) to compute camera poses and ensure view [...]
Deep 3D Geometric Reasoning for Robot Manipulation
Abstract: To solve general manipulation tasks in real-world environments, robots must be able to perceive and condition their manipulation policies on the 3D world. These agents will need to understand various common-sense spatial/geometric concepts about manipulation tasks: that local geometry can suggest potential manipulation strategies, that policies should be invariant across choice of reference frame, [...]
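One common way to realize the reference-frame invariance the abstract calls for (a standard trick, not necessarily the speaker's method) is to canonicalize the input point cloud before the policy sees it, e.g., by subtracting the centroid; a world-frame translation then provably has no effect on the policy input:

```python
import numpy as np

def canonicalize(points):
    """Translate a point cloud to its centroid frame.

    Any policy computed on the canonicalized cloud is, by construction,
    invariant to where the scene sits in the world frame (translation only;
    rotation invariance needs equivariant features or augmentation).
    """
    centroid = points.mean(axis=0)
    return points - centroid, centroid

rng = np.random.default_rng(0)
cloud = rng.normal(size=(256, 3))
shifted = cloud + np.array([5.0, -2.0, 1.0])   # same scene, new world frame
canon_a, _ = canonicalize(cloud)
canon_b, _ = canonicalize(shifted)
assert np.allclose(canon_a, canon_b)            # policy input is identical
```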
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Toward an ImageNet Moment for Synthetic Data
Abstract: Data, especially large-scale labeled data, has been a critical driver of progress in computer vision. However, many important tasks remain starved of high-quality data. Synthetic data from computer graphics is a promising solution to this challenge, but still remains in limited use. This talk will present our work on Infinigen, a procedural synthetic data [...]
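For readers new to the idea, a toy sketch of what "procedural synthetic data" means in practice: every training example is drawn by sampling scene parameters from hand-written distributions, so labels come from the generator itself. This is illustrative only and is not Infinigen's actual API.

```python
import random

def sample_scene(seed):
    """Draw one procedurally generated scene description.

    Illustrative only -- Infinigen's real generators are far richer
    (geometry, materials, lighting all sampled from procedural rules).
    """
    rng = random.Random(seed)
    return {
        "object": rng.choice(["rock", "bush", "log"]),
        "scale": rng.uniform(0.2, 2.0),
        "pose_xy": (rng.uniform(-5, 5), rng.uniform(-5, 5)),
        "sun_elevation_deg": rng.uniform(5, 85),
        # labels (segmentation, depth, ...) come free from the renderer
    }

dataset = [sample_scene(i) for i in range(10_000)]  # data is never the bottleneck
```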
Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World
Abstract: We show that imitating shortest-path planners in simulation produces Stretch RE-1 robotic agents that, given language instructions, can proficiently navigate, explore, and manipulate objects in both simulation and the real world using only RGB sensors (no depth maps or GPS coordinates). This surprising result is enabled by our end-to-end, transformer-based SPOC architecture, powerful [...]
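A minimal sketch of the recipe in the abstract, shrunk to a toy grid world: a classical shortest-path planner (BFS here) acts as the expert, and its trajectories become supervised (observation, action) pairs for imitation. The real SPOC system uses a simulator and a transformer policy; everything below is a hypothetical stand-in.

```python
from collections import deque

MOVES = {"N": (-1, 0), "S": (1, 0), "W": (0, -1), "E": (0, 1)}

def shortest_path_actions(grid, start, goal):
    """BFS expert: action sequence along a shortest path (toy planner)."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        for act, (dr, dc) in MOVES.items():
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = (cell, act)
                queue.append(nxt)
    actions, cell = [], goal
    while parent[cell] is not None:       # backtrack from goal to start
        cell, act = parent[cell]
        actions.append(act)
    return actions[::-1]

# Imitation data: (observation, expert_action) pairs for supervised training.
grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
demo = shortest_path_actions(grid, (0, 0), (2, 0))
pairs = list(zip(range(len(demo)), demo))   # stand-in for (obs_t, a_t) pairs
```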
Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi-Sensor Kalman Filter
This talk has been postponed […]
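Since only the title is available, here is a generic sketch of the construction it names: a Kalman filter that fuses several sensors via sequential measurement updates. Written in an autodiff framework, the same arithmetic becomes "differentiable," letting noise covariances like Q and R be learned end to end; the sensors and numbers below are invented for illustration.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Time update: propagate state mean and covariance."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Measurement update for one sensor (standard Kalman equations)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity 1D state [pos, vel]; two position sensors fused in turn.
# In an autodiff framework (e.g., JAX or PyTorch), Q and R become learnable,
# which is the "differentiable" part of the title.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.eye(2) * 1e-3
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, F, Q)
for z, R in [(np.array([0.12]), np.eye(1) * 0.05),   # lidar-like sensor
             (np.array([0.09]), np.eye(1) * 0.20)]:  # camera-like sensor
    x, P = kf_update(x, P, z, H, R)
```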
Towards diverse zero-shot manipulation via actualizing visual plans
Abstract: In this thesis, we seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation — interacting with unseen objects in novel scenes without test-time adaptation. Robots that can be reliably deployed out-of-the-box in new scenarios have the potential to help humans in everyday tasks. Not requiring any test-time training through demonstrations or [...]
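Reading the title and abstract together, the pipeline appears to be: an upstream planner produces visual subgoals, and a goal-conditioned policy "actualizes" them one at a time. A hedged interface sketch, with every function and dimension hypothetical:

```python
import numpy as np

def goal_conditioned_policy(obs_embed, goal_embed, W):
    """Hypothetical policy head: action from concatenated (obs, goal) features.

    In the real system both embeddings would come from a learned visual
    encoder and W from training; here they are random stand-ins.
    """
    feat = np.concatenate([obs_embed, goal_embed])
    return np.tanh(feat @ W)          # e.g., a 7-DoF arm command

def rollout(subgoal_embeds, obs_embed, W, steps_per_goal=10):
    """Actualize a visual plan: chase each subgoal embedding in sequence."""
    actions = []
    for g in subgoal_embeds:          # subgoals from an upstream visual planner
        for _ in range(steps_per_goal):
            actions.append(goal_conditioned_policy(obs_embed, g, W))
    return actions

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 7)) * 0.05  # 64-d obs + 64-d goal -> 7-d action
plan = [rng.normal(size=64) for _ in range(3)]
acts = rollout(plan, rng.normal(size=64), W)
```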
Deep Learning for Sensors: Development to Deployment
Abstract: Robots rely heavily on sensing to reason about physical interactions, and recent advancements in rapid prototyping, MEMS sensing, and machine learning have led to a plethora of sensing alternatives. However, few of these sensors have gained widespread use among roboticists. This thesis proposes a framework for incorporating sensors into a robot learning paradigm, from [...]
Offline Learning for Stochastic Multi-Agent Planning in Autonomous Driving
Abstract: Fully autonomous vehicles have the potential to greatly reduce vehicular accidents and revolutionize how people travel and how we transport goods. Many of the major challenges for autonomous driving systems emerge from the numerous traffic situations that require complex interactions with other agents. For the foreseeable future, autonomous vehicles will have to share the [...]
Teruko Yata Memorial Lecture
Human-Centric Robots and How Learning Enables Generality
Abstract: Humans have dreamt of robot helpers forever. What's new is that this dream is becoming real. New developments in AI, building on foundations of hardware and passive dynamics, enable vastly improved generality. Robots can step out of highly structured environments and become more human-centric: operating in human [...]
2024 Robotics Institute National Robotics Week Celebration Tours and Demos
April 12, 1:00 - 4:00 pm: PUBLIC SPACE ROBOTS (Open to the public)
TANK the Roboceptionist, Newell-Simon Hall, 3rd floor entry area
Meet Marion (Tank) LeFleur, Newell-Simon's Roboceptionist. He'll be glad to see you! The goal of the project is to produce a robot helpmate that is useful, exhibits social competence, and remains compelling to [...]
Creating robust deep learning models involves effectively managing nuisance variables
Abstract: Over the past decade, we have witnessed significant advances in the capabilities of deep neural network models in vision and machine learning. However, issues related to bias, discrimination, and fairness in general have received a great deal of negative attention (e.g., mistakes in surveillance and animal-human confusion by vision models). But bias in AI models [...]
Transfer Learning via Temporal Contrastive Learning
Abstract: This thesis introduces a novel transfer learning framework for deep reinforcement learning. The approach automatically combines goal-conditioned policies with temporal contrastive learning to discover meaningful sub-goals. The approach involves pre-training a goal-conditioned agent, finetuning it on the target domain, and using contrastive learning to construct a planning graph that guides the agent via sub-goals. Experiments [...]
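A sketch of the temporal contrastive ingredient the abstract names, in its common InfoNCE form: embeddings of a state and a near-future state are treated as a positive pair, with the rest of the batch as negatives. How the thesis turns the learned embedding into a planning graph is not shown here.

```python
import numpy as np

def temporal_info_nce(z_t, z_tk, temperature=0.1):
    """InfoNCE over a batch of (state, near-future state) embedding pairs.

    z_t, z_tk: (B, D) L2-normalized embeddings of s_t and s_{t+k}.
    Row i's positive is z_tk[i]; every other row is a negative.
    """
    logits = (z_t @ z_tk.T) / temperature        # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
future = z + 0.05 * rng.normal(size=z.shape)     # stand-in for s_{t+k} embeddings
future /= np.linalg.norm(future, axis=1, keepdims=True)
loss = temporal_info_nce(z, future)
```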
Towards Influence-Aware Safe Human-Robot Interaction
Abstract: In recent years, we have seen through recommender systems on social media how influential (and potentially harmful) algorithms can be in our lives, sometimes creating polarization and conspiracies that lead to unsafe behavior. Now that robots are also growing more common in the real world, we must be very careful to ensure that they [...]
Learning to Manipulate beyond Imitation
Abstract: Imitation learning has been a prevalent approach for teaching robots manipulation skills but still suffers from scalability and generalizability. In this talk, I'll argue for going beyond elementary behavioral imitation from human demonstrations. Instead, I'll present two key directions: 1) Creating Manipulation Controllers from Pre-Trained Representations, and 2) Representing Video Demonstrations with Parameterized Symbolic [...]
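A hedged sketch of direction (1): keep a pre-trained visual encoder frozen and train only a small head that maps its features to actions, e.g., by behavior cloning. All components below are toy stand-ins, not the speaker's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(image):
    """Stand-in for a pre-trained visual backbone (weights never updated)."""
    return np.tanh(image.reshape(-1)[:512])      # pretend 512-d feature

class LinearPolicyHead:
    """The only trainable piece: features -> actions, fit by regression."""
    def __init__(self, dim_in=512, dim_out=7):
        self.W = rng.normal(size=(dim_in, dim_out)) * 0.01
    def __call__(self, feat):
        return feat @ self.W
    def sgd_step(self, feat, target, lr=1e-2):
        err = self(feat) - target                # behavior-cloning residual
        self.W -= lr * np.outer(feat, err)

head = LinearPolicyHead()
img = rng.normal(size=(32, 32, 3))
head.sgd_step(frozen_encoder(img), target=np.zeros(7))
```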