Multimodal Modeling: Learning Beyond Visual Knowledge
Abstract: The computer vision community has embraced the success of learning specialist models by training with a fixed set of predetermined object categories, such as ImageNet or COCO. However, learning only from visual knowledge might hinder the flexibility and generality of visual models, which then require additional labeled data to specify any other visual concept and [...]
Improving Robotic Exploration with Self-Supervision and Diverse Data
Abstract: Reinforcement learning (RL) holds great promise for improving robotics, as it allows systems to move beyond passive learning and interact with the world while learning from these interactions. A key aspect of this interaction is exploration: which actions should an RL agent take to best learn about the world? Prior work on exploration is typically [...]
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
An Extension to Model Predictive Path Integral Control and Modeling Considerations for Off-road Autonomous Driving in Complex Environment
Abstract: The ability to traverse complex environments and terrains is critical to autonomously driving off-road in a fast and safe manner. Challenges such as terrain navigation and vehicle rollover prevention become imperative due to the off-road vehicle configuration and the operating environment itself. This talk will introduce some of these challenges and the different tools [...]
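As background on the control framework named in the title, the sketch below shows a minimal single update of Model Predictive Path Integral (MPPI) control on a toy double integrator. The `step` and `running_cost` functions are hypothetical stand-ins; this is a generic textbook-style sketch, not the speaker's extension or off-road vehicle models.

```python
import numpy as np

def mppi_update(x0, U, step, running_cost, n_samples=256, noise_std=0.5, lam=1.0):
    """One MPPI update: sample noisy control sequences, roll them out through the
    dynamics, and re-weight the nominal plan by exponentiated negative cost."""
    horizon = len(U)
    noise = np.random.randn(n_samples, horizon) * noise_std
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = np.array(x0, dtype=float)
        for t in range(horizon):
            u = U[t] + noise[k, t]
            x = step(x, u)                  # hypothetical dynamics model
            costs[k] += running_cost(x, u)  # hypothetical stage cost
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return U + w @ noise                    # softmax-weighted perturbation of the plan

# toy double-integrator example
step = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u])
cost = lambda x, u: x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u ** 2
U = mppi_update(np.array([1.0, 0.0]), np.zeros(20), step, cost)
```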
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Heuristic Search Based Planning by Minimizing Anticipated Search Efforts
Abstract: We focus on relatively low dimensional robot motion planning problems, such as planning for navigation of a self-driving vehicle, unmanned aerial vehicles (UAVs), and footstep planning for humanoids. In these problems, there is a need for fast planning, potentially compromising the solution quality. Often, we want to plan fast but are also interested in [...]
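For context on the speed-versus-quality trade-off mentioned above, here is a generic weighted A* sketch on a small grid: inflating the heuristic by w >= 1 typically finds a solution faster at the cost of bounded suboptimality. All names are illustrative; this is the standard baseline, not the speaker's method for minimizing anticipated search effort.

```python
import heapq

def weighted_astar(start, goal, neighbors, heuristic, w=2.0):
    """Weighted A*: returns a path whose cost is at most w times optimal."""
    open_list = [(w * heuristic(start, goal), 0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    while open_list:
        _, g_cur, cur = heapq.heappop(open_list)
        if g_cur > g.get(cur, float("inf")):
            continue                      # stale heap entry
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for nxt, cost in neighbors(cur):
            if g_cur + cost < g.get(nxt, float("inf")):
                g[nxt] = g_cur + cost
                parent[nxt] = cur
                heapq.heappush(open_list, (g[nxt] + w * heuristic(nxt, goal), g[nxt], nxt))
    return None

# 4-connected 10x10 grid example
def neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1.0) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(weighted_astar((0, 0), (9, 9), neighbors, manhattan))
```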
Robotic Cave Exploration for Search, Science, and Survey
Abstract: Robotic cave exploration has the potential to create significant societal impact through facilitating search and rescue, in the fight against antibiotic resistance (science), and via mapping (survey). But many state-of-the-art approaches for active perception and autonomy in subterranean environments rely on disparate perceptual pipelines (e.g., pose estimation, occupancy modeling, hazard detection) that process the same underlying sensor data in different [...]
Audio-Visual Learning for Social Telepresence
Abstract: Relationships between people are strongly influenced by distance. Even with today’s technology, remote communication is limited to a two-dimensional audio-visual experience and lacks the availability of a shared, three-dimensional space in which people can interact with each other over distance. Our mission at Reality Labs Research (RLR) in Pittsburgh is to develop such [...]
An autonomous navigation system that could hopefully support RI research
I will show a few videos as the key results of our research in the last several years. These results span the scope of state estimation, mapping, autonomous navigation, and exploration. While these results illustrate separate pieces of work, the underlying modules ultimately contribute to a single, integrated autonomy system. I will show a simulation [...]
Combining Offline Reinforcement Learning with Stochastic Multi-Agent Planning for Autonomous Driving
Abstract: Fully autonomous vehicles have the potential to greatly reduce vehicular accidents and revolutionize how people travel and how we transport goods. Many of the major challenges for autonomous driving systems emerge from the numerous traffic situations that require complex interactions with other agents. For the foreseeable future, autonomous vehicles will have to share the [...]
Argo Poster Session
Join us for an opportunity to see what Center students have been working on. Check out an Argo AI self-driving car in person, and grab some free appetizers, soft drinks, and Argo AI swag! All are welcome to attend.
Representations in Robot Manipulation: Learning to Manipulate Ropes, Fabrics, Bags, and Liquids
Abstract: The robotics community has seen significant progress in applying machine learning for robot manipulation. However, much manipulation research focuses on rigid objects instead of highly deformable objects such as ropes, fabrics, bags, and liquids, which pose challenges due to their complex configuration spaces, dynamics, and self-occlusions. To achieve greater progress in robot manipulation of [...]
Human-to-Robot Imitation in the Wild
Abstract: In this talk, I approach the problem of learning by watching humans in the wild. While traditional approaches in Imitation and Reinforcement Learning are promising for learning in the real world, they are either sample inefficient or are constrained to lab settings. Meanwhile, there has been a lot of success in processing passive, unstructured human [...]
Safe and Stable Learning for Agile Robots without Reinforcement Learning
Abstract: My research group (https://aerospacerobotics.caltech.edu/) is working to systematically leverage AI and Machine Learning techniques towards achieving safe and stable autonomy of safety-critical robotic systems, such as robot swarms and autonomous flying cars. Another example is LEONARDO, the world's first bipedal robot that can walk, fly, slackline, and skateboard. Stability and safety are often research problems [...]
Towards editable indoor lighting estimation
Abstract: Combining virtual and real visual elements into a single, realistic image requires the accurate estimation of the lighting conditions of the real scene. In recent years, several approaches of increasing complexity---ranging from simple encoder-decoder architecture to more sophisticated volumetric neural rendering---have been proposed. While the quality of automatic estimates has increased, they have the unfortunate downside [...]
Causal Robot Learning for Manipulation
Abstract: Two decades into the third age of AI, the rise of deep learning has yielded two seemingly disparate realities. In one, massive accomplishments have been achieved in deep reinforcement learning, protein folding, and large language models. Yet, in the other, the promises of deep learning to empower robots that operate robustly in real-world environments [...]
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Computational imaging with multiply scattered photons
Abstract: Computational imaging has advanced to a point where the next significant milestone is to image in the presence of multiply-scattered light. Though traditionally treated as noise, multiply-scattered light carries information that can enable previously impossible imaging capabilities, such as imaging around corners and deep inside tissue. The combinatorial complexity of multiply-scattered light transport makes [...]
Dense Reconstruction of Dynamic Structures from Monocular RGB Videos
Abstract: We study the problem of 3D reconstruction of generic and deformable objects and scenes from casually-taken RGB videos, to create a system for capturing the dynamic 3D world. Being able to reconstruct dynamic structures from casual videos allows one to create avatars and motion references for arbitrary objects without specialized devices, [...]
Differentiable Collision Detection
Abstract: Collision detection between objects is critical for simulation, control, and learning for robotic systems. However, existing collision detection routines are inherently non-differentiable, limiting their applications in gradient-based optimization tools. In this talk, I present DCOL: a fast and fully differentiable collision-detection framework that reasons about collisions between a set of composable and highly expressive [...]
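DCOL itself formulates collision detection between convex primitives as a differentiable optimization; as a much simpler illustration of why differentiability matters for gradient-based tools, the sketch below differentiates a closed-form sphere-sphere proximity function with JAX. This is only a toy analogue under that simplifying assumption, not the DCOL formulation.

```python
import jax
import jax.numpy as jnp

def sphere_proximity(p1, r1, p2, r2):
    """Signed distance between two spheres: negative when they interpenetrate.
    Being a smooth function of the centers, it can be differentiated directly."""
    return jnp.linalg.norm(p1 - p2) - (r1 + r2)

# Gradient of the proximity w.r.t. the first sphere's center: the direction
# in which moving that sphere separates the pair fastest.
grad_p1 = jax.grad(sphere_proximity, argnums=0)

p1, p2 = jnp.array([0.0, 0.0, 0.0]), jnp.array([1.0, 0.5, 0.0])
print(sphere_proximity(p1, 0.6, p2, 0.6))   # negative -> collision
print(grad_p1(p1, 0.6, p2, 0.6))            # separating direction
```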
Towards $1 robots
Abstract: Robots are pretty great -- they can make some hard tasks easy, some dangerous tasks safe, or some unthinkable tasks possible. And they're just plain fun to boot. But how many robots have you interacted with recently? And where do you think that puts you compared to the rest of the world's people? In [...]
Mental models for 3D modeling and generation
Abstract: Humans have extraordinary capabilities of comprehending and reasoning about our 3D visual world. One particular reason is that when looking at an object or a scene, not only can we see the visible surface, but we can also hallucinate the invisible parts - the amodal structure, appearance, affordance, etc. We have accumulated thousands of [...]
On Interaction, Imitation, and Causation
Abstract: A standard critique of machine learning models (especially neural networks) is that they pick up on spurious correlations rather than causal relationships and are therefore brittle in the face of distribution shift. Solving this problem in full generality is impossible (i.e. there might be no good way to distinguish between the two). However, if [...]
Learning via Visual-Tactile Interaction
Abstract: Humans learn by interacting with their surroundings using all of their senses. The first of these senses to develop is touch, and it is the first way that young humans explore their environment, learn about objects, and tune their cost functions (via pain or treats). Yet, robots are often denied this highly informative and [...]
Accelerating Numerical Methods for Optimal Control
Abstract: Many modern control methods, such as model-predictive control, rely heavily on solving optimization problems in real time. In particular, the ability to efficiently solve optimal control problems has enabled many of the recent breakthroughs in achieving highly dynamic behaviors for complex robotic systems. The high computational requirements of these algorithms demand novel algorithms tailor-suited [...]
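As a concrete example of the structured linear algebra such solvers exploit, here is a standard backward Riccati recursion for a finite-horizon, discrete-time LQR problem on a double integrator. This is a textbook sketch for context, not the speaker's accelerated algorithms.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion; returns time-varying gains K[t] with u = -K[t] @ x."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# double-integrator example
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = finite_horizon_lqr(A, B, np.eye(2), 0.1 * np.eye(1), 10 * np.eye(2), N=50)
x = np.array([1.0, 0.0])
for t in range(50):
    x = A @ x + B @ (-K[t] @ x)   # closed-loop rollout drives x toward the origin
```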
Tactile SLAM: perception for dexterity via vision-based touch
Abstract: Touch provides a direct window into robot-object interaction, free from occlusion and aliasing faced by visual sensing. Collated tactile perception can facilitate contact-rich tasks---like in-hand manipulation, sliding, and grasping. Here, online estimates of object geometry and pose are crucial for downstream planning and control. With significant advances in tactile sensing, like vision-based touch, a [...]
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Resource Allocation for Learning in Robotics
Abstract: Robots operating in the real world need fast and intelligent decision making systems. While these systems have traditionally consisted of human-engineered behaviors and world models, there has been a lot of interest in integrating them with data-driven components to achieve faster execution and reduce hand-engineering. Unfortunately, these learning-based methods require large amounts of training [...]
What (else) can you do with a robotics degree?
Abstract: In 2004, half-way through my robotics Ph.D., I had a panic-inducing thought: What if I don’t want to build robots for the rest of my life? What can I do with this degree?! Nearly twenty years later, I have some answers: tackle climate change in Latin America, educate Congress about autonomous vehicles, improve how [...]
Complete Codec Telepresence
Abstract: Imagine two people, each of them within their own home, being able to communicate and interact virtually with each other as if they are both present in the same shared physical space. Enabling such an experience, i.e., building a telepresence system that is indistinguishable from reality, is one of the goals of Reality Labs [...]
R.I.P. ohyay: experiences building online virtual experiences during the pandemic: what worked, what hasn’t, and what we need in the future
Abstract: During the pandemic I helped design ohyay (https://ohyay.co), a creative tool for making and hosting highly customized video-based virtual events. Since Fall 2020 I have personally designed many online events, ranging from classroom activities (lectures, small group work, poster sessions, technical paper PC meetings), to conferences, to virtual offices, to holiday parties involving hundreds [...]
Planning with Dynamics by Interleaving Search and Trajectory Optimization
Abstract: Search-based planning algorithms enable autonomous agents like robots to come up with well-reasoned long-horizon plans to achieve a given task objective. They do so by searching over the graph that results from discretizing the state and action space. However, in robotics, several dynamically rich tasks require high-dimensional planning in the continuous space. For such [...]
Physics-informed image translation
Abstract: Generative Adversarial Networks (GANs) have shown remarkable performance in image translation, being able to map source input images to target domains (e.g. from male to female, day to night, etc.). However, their performance may be limited by insufficient supervision, which may be challenging to obtain. In this talk, I will present our recent works [...]
Robots Should Reduce, Reuse, and Recycle
Abstract: Despite numerous successes in deep robotic learning over the past decade, the generalization and versatility of robots across environments and tasks has remained a major challenge. This is because much of reinforcement and imitation learning research trains agents from scratch in a single or a few environments, training special-purpose policies from special-purpose datasets. In [...]
Solving Constraint Tasks with Memory-Based Learning
Abstract: In constraint tasks, the current task state heavily limits what actions are available to an agent. Mechanical constraints exist in many common tasks such as construction, disassembly, and rearrangement, and task-space constraints exist in an even broader range of tasks. Deep reinforcement learning algorithms have typically struggled with constraint tasks for two main [...]
Weak Multi-modal Supervision for Object Detection and Persuasive Media
Abstract: The diversity of visual content available on the web presents new challenges and opportunities for computer vision models. In this talk, I present our work on learning object detection models from potentially noisy multi-modal data, retrieving complementary content across modalities, transferring reasoning models across dataset boundaries, and recognizing objects in non-photorealistic media. While the [...]
Head-Worn Assistive Teleoperation of Mobile Manipulators
Abstract: Mobile manipulators in the home can provide increased autonomy to individuals with severe motor impairments, who often cannot complete activities of daily living (ADLs) without the help of a caregiver. Teleoperation of an assistive mobile manipulator could enable an individual with motor impairments to independently perform self-care and household tasks, yet limited motor function [...]
Text Classification with Class Descriptions Only
Abstract: In this work, we introduce KeyClass, a weakly-supervised text classification framework that learns from class-label descriptions only, without the need to use any human-labeled documents. It leverages the linguistic domain knowledge stored within pre-trained language models and data programming to automatically label documents. We demonstrate its efficacy and flexibility by comparing it to state-of-the-art [...]
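To make the setting concrete, the sketch below assigns weak labels to documents purely from class descriptions by comparing embeddings from a pre-trained sentence encoder. KeyClass additionally uses data programming to aggregate such weak signals and train a downstream classifier, which is omitted here; the class descriptions and model name below are illustrative assumptions, not the paper's configuration.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical class descriptions standing in for the class-label descriptions
# that KeyClass learns from.
class_descriptions = {
    "sports": "articles about games, athletes, teams, and scores",
    "business": "articles about companies, markets, and the economy",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
class_names = list(class_descriptions)
class_emb = model.encode([class_descriptions[c] for c in class_names])

def weak_label(documents):
    """Assign each document the class whose description is closest in embedding
    space (cosine similarity) -- a weak, annotation-free labeling signal."""
    doc_emb = model.encode(documents)
    sims = doc_emb @ class_emb.T
    sims /= np.linalg.norm(doc_emb, axis=1, keepdims=True) * np.linalg.norm(class_emb, axis=1)
    return [class_names[i] for i in sims.argmax(axis=1)]

print(weak_label(["The striker scored twice in the final."]))
```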
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Multi-Object Tracking in the Crowd
Abstract: In this talk, I will focus on the problem of multi-object tracking in crowded scenes. Tracking within crowds is particularly challenging due to heavy occlusion and frequent crossover between tracking targets. The problem becomes more difficult when we only have noisy bounding boxes due to background and neighboring objects. Existing tracking methods try to [...]
Utilizing Panoptic Segmentation and a Locally-Conditioned Neural Representation to Build Richer 3D Maps
Abstract: Advances in deep-learning-based perception and the maturation of volumetric RGB-D mapping algorithms have allowed autonomous robots to be deployed in increasingly complex environments. For robust operation in open-world conditions, however, perceptual capabilities are still lacking. Limitations of commodity depth sensors mean that complex geometries and textures cannot be reconstructed accurately. Semantic understanding is still [...]
NREC Study Group & Recent Projects
This talk will describe the NREC study process, which has been developed as a lower-cost-of-entry work product for potential partners. This process is available to anyone on campus who wants to help their sponsors create viable system concepts and estimate potential development costs before committing to a full development program. [...]
Machine Learning and Model Predictive Control for Adaptive Robotic Systems
Abstract: In this talk I will discuss several different ways in which ideas from machine learning and model predictive control (MPC) can be combined to build intelligent, adaptive robotic systems. I’ll begin by showing how to learn models for MPC that perform well on a given control task. Next, I’ll introduce an online learning perspective on [...]
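As the simplest instance of learning a model for MPC, the sketch below fits a linear dynamics model x_{t+1} ≈ A x_t + B u_t to logged transitions by least squares; the fitted (A, B) would then be handed to an MPC solver. The data here are synthetic, and the talk's task-driven and online-learning perspectives go well beyond this baseline.

```python
import numpy as np

# Hypothetical transition data (x_t, u_t, x_{t+1}); in practice these would be
# logged while running some controller on the real system.
rng = np.random.default_rng(1)
A_true, B_true = np.array([[1.0, 0.1], [0.0, 0.95]]), np.array([[0.0], [0.1]])
X = rng.standard_normal((200, 2))
U = rng.standard_normal((200, 1))
X_next = X @ A_true.T + U @ B_true.T + 0.01 * rng.standard_normal((200, 2))

# Fit x_{t+1} ~ [A B] [x; u] by least squares.
Z = np.hstack([X, U])                                # (200, 3) regressors
AB = np.linalg.lstsq(Z, X_next, rcond=None)[0].T     # (2, 3) = [A_hat B_hat]
A_hat, B_hat = AB[:, :2], AB[:, 2:]
print(A_hat, B_hat, sep="\n")                        # close to A_true, B_true
```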
Magnification-invariant retinal distance estimation using a laser aiming beam
Abstract: Retinal surgery procedures like epiretinal membrane peeling and retinal vein cannulation require surgeons to manipulate very delicate structures in the eye with little room for error. Many robotic surgery systems have been developed to help surgeons and enforce safeguards during these demanding procedures. One essential piece of information that is required to create and [...]
Towards more effective remote execution of exploration operations using multimodal interfaces
Abstract: Remote robots enable humans to explore and interact with environments while keeping them safe from existing harsh conditions (e.g., in search and rescue, deep sea or planetary exploration scenarios). However, the gap between the control station and the remote robot presents several challenges (e.g., situation awareness, cognitive load, perception, latency) for effective teleoperation. Multimodal [...]
Bridging Humans and Generative Models
Abstract: Deep generative models make visual content creation more accessible to novice and professional users alike by automating the synthesis of diverse, realistic content based on a collected dataset. People often use generative models as data-driven sources, making it challenging to personalize a model easily. Currently, personalizing a model requires careful data curation, which is [...]
Learning Visual, Audio, and Cross-Modal Correspondences
Abstract: Today's machine perception systems rely heavily on supervision provided by humans, such as labels and natural language. I will talk about our efforts to make systems that, instead, learn from two ubiquitous sources of unlabeled data: visual motion and cross-modal sensory associations. I will begin by discussing our work on creating unified models for [...]
Impulse considerations for reasoning about intermittent contacts
Abstract: Many of our interactions with the environment involve making and breaking contacts. However, it is not always obvious how one should reason about these intermittent contacts (sequence, timings, locations) in an online and adaptive way. This is particularly relevant in gait generation for legged locomotion control, where it is standard to simply predefine and [...]
Multi-Human 3D Reconstruction from Monocular RGB Videos
Abstract: We study the problem of multi-human 3D reconstruction from RGB videos captured in the wild. Humans have dynamic motion, and reconstructing them in arbitrary settings is key to building immersive social telepresence, assistive humanoid robots, and augmented reality systems. However, creating such a system requires addressing fundamental issues with previous works regarding the data [...]
Learning and Translating Temporal Abstractions across Humans and Robots
Abstract: Humans possess a remarkable ability to learn to perform tasks from a variety of different sources: language, instructions, demonstrations, etc. In each case, they are able to easily extract the high-level strategy to solve the task, such as the recipe for cooking a dish, whilst ignoring irrelevant details, such as the precise shape of [...]
Robust Incremental Smoothing and Mapping
Abstract: In this work we present a method for robust optimization for online incremental Simultaneous Localization and Mapping (SLAM). Due to the NP-Hardness of data association in the presence of perceptual aliasing, tractable (approximate) approaches to data association will produce erroneous measurements. We require SLAM back-ends that can converge to accurate solutions in the presence [...]
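To illustrate one common ingredient of robust SLAM back-ends, the toy sketch below solves a 1D pose graph by iteratively reweighted least squares with a Huber weight, so a grossly wrong loop closure is down-weighted rather than corrupting the estimate. This is a generic robust-estimation sketch under that simplification, not the method presented in the talk.

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """Quadratic near zero, linear in the tails: a single bad residual cannot dominate."""
    a = abs(r)
    return 1.0 if a <= delta else delta / a

def robust_1d_pose_graph(odometry, loops, n_iters=20):
    """Toy 1D pose graph; edges are (i, j, dx) meaning x[j] - x[i] should equal dx."""
    n = max(max(i, j) for i, j, _ in odometry + loops) + 1
    x = np.zeros(n)
    for i, j, dx in odometry:              # initialize by chaining odometry
        x[j] = x[i] + dx
    for _ in range(n_iters):
        H = np.eye(n) * 1e-6               # tiny prior fixes the gauge freedom
        b = np.zeros(n)
        for i, j, dx in odometry + loops:
            r = (x[j] - x[i]) - dx
            w = huber_weight(r)
            for a_, s in ((i, -1.0), (j, 1.0)):
                b[a_] -= s * w * r
                for b_, t in ((i, -1.0), (j, 1.0)):
                    H[a_, b_] += s * t * w
        x += np.linalg.solve(H, b)         # Gauss-Newton step on the reweighted problem
    return x

odo = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
loops = [(0, 3, 3.1), (0, 3, 50.0)]        # second loop closure is a gross outlier
print(robust_1d_pose_graph(odo, loops))    # stays near [0, 1, 2, 3]
```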
3D Reconstruction using Differential Imaging
Abstract: 3D reconstruction has been at the core of many computer vision applications, including autonomous driving, visual inspection in manufacturing, and augmented and virtual reality (AR/VR). Because monocular 3D sensing is fundamentally ill-posed, many techniques aiming for accurate reconstruction use multiple captures to solve the inverse problem. Depending on the amount of change in these [...]
Learning with Structured Priors for Robust Robot Manipulation
Abstract: Robust and generalizable robots that can autonomously manipulate objects in semi-structured environments can bring material benefits to society. Data-driven learning approaches are crucial for enabling such systems by identifying and exploiting patterns in semi-structured environments, allowing robots to adapt to novel scenarios with minimal human supervision. However, despite significant prior work in learning for [...]
Learning Parameter-Efficient Quadrotor Dynamics Models
Abstract: Operation of quadrotors through high-speed, high-acceleration maneuvers remains a challenging problem due to the complex aerodynamics in this regime. While standard physical models suffice for control in near-hover conditions, the primary challenge in executing aggressive trajectories is obtaining a model for the quadrotor dynamics that adequately models the aerodynamic effects present, including lift, drag, [...]
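As a cartoon of what "parameter-efficient" can mean in this setting, the sketch below fits just three per-axis linear drag coefficients to the residual acceleration left after subtracting a nominal rigid-body model, using synthetic data. The actual models and aerodynamic effects discussed in the talk are richer than this; everything below is an illustrative assumption.

```python
import numpy as np

# Hypothetical logged data: body-frame velocities and the acceleration error
# remaining after the nominal (rigid-body + thrust) model is subtracted.
rng = np.random.default_rng(0)
v = rng.uniform(-10, 10, size=(500, 3))                    # m/s
true_drag = np.array([0.30, 0.30, 0.55])                   # per-axis linear drag
residual_accel = -true_drag * v + 0.05 * rng.standard_normal((500, 3))

# Parameter-efficient model: one drag coefficient per body axis, fit by least
# squares so that residual_accel ~ -diag(d) @ v.
d = np.array([-np.linalg.lstsq(v[:, i:i + 1], residual_accel[:, i], rcond=None)[0][0]
              for i in range(3)])
print(d)   # recovers approximately [0.30, 0.30, 0.55]
```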
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Self-Supervising Occlusions For Vision
Abstract: Virtually every scene has occlusions. Even a scene with a single object exhibits self-occlusions - a camera can only view one side of an object (left or right, front or back), or part of the object is outside the field of view. More complex occlusions occur when one or more objects block part(s) of [...]
Multi-Sensor Robot Navigation and Subterranean Exploration
Predicting The Future and Linking the Past: Learning and Constructing Structured Models for Robotic Manipulation
Abstract: Intelligent robotic agents need to reason about the dynamics of their surrounding world, and use such dynamics reasoning to make future predictions for efficient task planning. In addition, it is also desirable for robots to associate past experience in their memories to their current observation, and conduct analogical reasoning to complete tasks at their [...]
MSR Thesis Talk: Tushar Kusnur
Title: Search-based Planning for Sensor-based Coverage Abstract: Robots are excellent candidates for the dull, dirty, and dangerous jobs we do not want humans to perform. Today, these include inspection of large areas or structures, post-disaster assessment, and surveillance. Assessing the aftermath of the recent Fern Hollow bridge collapse in Pittsburgh is one such example. Many [...]
Human-in-the-loop Model Creation
Abstract: Deep generative models make visual content creation more accessible to novice users by automating the synthesis of diverse, realistic content based on a collected dataset. However, the current machine learning approaches miss several elements of the creative process -- the ability to synthesize things that go far beyond the data distribution and everyday experience, [...]
Robotic Interestingness via Human-Informed Few-Shot Object Detection
Abstract: Interestingness recognition is crucial for decision making in autonomous exploration for mobile robots. Previous work proposed an unsupervised online learning approach that can adapt to environments and detect interesting scenes quickly, but lacks the ability to adapt to human-informed interesting objects. To solve this problem, we introduce a human-interactive framework, AirInteraction, that can detect [...]
Towards a formal theory of deep optimisation
Abstract: Precise understanding of the training of deep neural networks is largely restricted to architectures such as MLPs and cost functions such as the square cost, which is insufficient to cover many practical settings. In this talk, I will argue for the necessity of a formal theory of deep optimisation. I will describe such a [...]
MSR Thesis Talk: Nikhil Angad Bakshi
Title: See But Don't Be Seen: Towards Stealthy Active Search in Heterogeneous Multi-Robot Systems Abstract: Robotic solutions for quick disaster response are essential to ensure minimal loss of life, especially when the search area is too dangerous or too vast for human rescuers. We model this problem as an asynchronous multi-agent active-search task where each robot aims [...]
MSR Thesis Talk: Yves Georgy Daoud
Title: Spatial Tasking in Human-Robot Collaborative Exploration Abstract: This work develops a methodology for collaborative human-robot exploration that leverages implicit coordination. Most autonomous single- and multi-robot exploration systems require a remote operator to provide explicit guidance to the robot team. Few works consider how to integrate the human partner alongside robots to provide guidance in the [...]
MSR Thesis Talk: Ambareesh Revanur
Title: Towards Video-based Physiology Estimation Abstract: RGB-video based human physiology estimation has a wide range of practical applications in telehealth, sports and deep fake detection. Therefore, researchers in the community have collected several video datasets and have advanced new methods over the years. In this dissertation, we study these methods extensively and aim to address the [...]
MSR Thesis Talk: Raghavv Goel
Title: Automating Ultrasound Based Vascular Access Abstract: Timely care of trauma patients is important to prevent casualties in resource-limited regions such as the battlefield. In order to treat such trauma using point of care diagnosis, medical practitioners typically use an ultrasound for vascular access or detection of subcutaneous splinters for providing critical care. The problem here is two-fold: [...]
MSR Thesis Talk: Mayank Singh
Title: Analogical Networks: Memory-Modulated In-Context 3D Parsing Abstract: Recent advances in the applications of deep neural networks to numerous visual perception tasks have shown excellent performance. However, this generally requires access to large amounts of training samples, and hence one persistent challenge is the setting of few-shot learning. In most existing works, a separate parametric neural [...]
Learning with Diverse Forms of Imperfect and Indirect Supervision
Abstract: Powerful Machine Learning (ML) models trained on large, annotated datasets have driven impressive advances in fields including natural language processing and computer vision. In turn, such developments have led to impactful applications of ML in areas such as healthcare, e-commerce, and predictive maintenance. However, obtaining annotated datasets at the scale required for training high [...]
MSR Thesis Talk: Yutian Lei
Title: ARC: AdveRsarial Calibration between Modalities Abstract: Advances in computer vision and machine learning techniques have led to flourishing success in RGB-input perception tasks, which has also opened unbounded possibilities for non-RGB-input perception tasks, such as object detection from wireless signals, point clouds, and infrared light. However, compared to the mature development pipeline of RGB-input [...]
FRIDA: Supporting Artistic Communication in Real-World Image Synthesis Through Diverse Input Modalities
Abstract: FRIDA, a Framework and Robotics Initiative for Developing Arts, is a robot painting system designed to translate an artist's high-level intentions into real world paintings. FRIDA can paint from combinations of input images, text, style examples, sounds, and sketches. Planning is performed in a differentiable, simulated environment created using real data from the robot [...]
Perception for High-Speed Off-Road Driving
Abstract: On-road autonomous driving has seen rapid progress in recent years with driverless vehicles being tested in various cities worldwide. However, this progress is limited to cities with well-established infrastructure and has yet to transfer to off-road regimes with unstructured environments and few paved roads. Advances in high-speed and reliable autonomous off-road driving can unlock [...]
Continual Learning of Compositional Skills for Robust Robot Manipulation
Abstract: Real world robots need to continuously learn new manipulation tasks in a lifelong learning manner. These new tasks often share sub-structures (in the form of sub-tasks, controllers) with previously learned tasks. To utilize these shared sub-structures, we explore a compositional and object-centric approach to learn manipulation tasks. While compositionality in robot manipulation can manifest [...]
Junior Faculty PhD Admissions Process Presentation
A presentation led by David Wettergreen regarding the PhD admissions process.
MSR Thesis Talk: Samuel Ong
Title: Data-Driven Slip Model for Improved Localization and Path Following applied to Lunar Micro-Rovers Abstract: Lunar micro-rovers need to solve a slew of challenges on the Moon, with no human intervention. One such challenge is the need to know their location in order to navigate and build maps. However, localization is challenging on the Moon due [...]
Computational Interferometric Imaging
Abstract: Imaging systems typically accumulate photons that, as they travel from a light source to a camera, follow multiple different paths and interact with several scene objects. This multi-path accumulation process confounds the information that is available in captured images about the scene and makes using these images to infer properties of scene objects, such [...]
Making AI trustworthy and understandable by clinicians
Abstract: Understandable-AI techniques facilitate the use of AI as a tool by human experts, giving humans insight into how AI decisions are made and thereby helping experts discern which AI predictions should or shouldn’t be trusted. Understandable techniques may be especially useful for applications with insufficient validation data for regulatory approval, for which human experts must remain the final decision [...]
Towards Interactive Radiance Fields
Abstract: In recent years, the fields of computer vision and computer graphics have increasingly converged. Using the exact same processes to model appearance during 3D reconstruction and rendering has shown tremendous benefits, especially when combined with machine learning techniques to model otherwise hard-to-capture or -simulate optical effects. In this talk, I will give an [...]
Robust and Context-Aware Real-Time Collaborative Robot Handling with Dynamic Gesture Commands
Abstract: Real-time collaborative robot (cobot) handling is a task where the cobot maneuvers an object under human dynamic gesture commands. Enabling dynamic gesture commands is useful when the human needs to avoid direct contact with the robot or the object handled by the robot. However, the key challenge lies in the heterogeneity in human behaviors [...]
Learning Representations for Interactive Robotics
In this talk, I will be discussing the role of learning representations for robots that interact with humans and robots that interactively learn from humans through a few different vignettes. I will first discuss how bounded rationality of humans guided us towards developing learned latent action spaces for shared autonomy. It turns out this “bounded rationality” is not a [...]
Motion Planning Around Obstacles with Graphs of Convex Sets
Abstract: In this talk, I'll describe a new approach to planning that strongly leverages both continuous and discrete/combinatorial optimization. The framework is fairly general, but I will focus on a particular application of the framework to planning continuous curves around obstacles. Traditionally, these sorts of motion planning problems have either been solved by trajectory optimization [...]
RE2 Robotics: from RI spinout to Acquisition
Abstract: It was July 2001. Jorgen Pedersen founded RE2 Robotics. It was supposed to be a temporary venture while he figured out his next career move. But the journey took an unexpected course. RE2 became a leading developer of mobile manipulation systems. Fast forward to 2022: RE2 Robotics exited via an acquisition by Sarcos Technology and [...]
Equivalent Policy Sets for Learning Aligned Models and Abstractions
Abstract: Recent successes in model-based reinforcement learning (MBRL) have demonstrated the enormous value that learned representations of environmental dynamics (i.e., models) can impart to autonomous decision making. While a learned model can never perfectly represent the dynamics of complex environments, models that are accurate in the "right" ways may still be highly useful for decision [...]
Dynamic Route Guidance in Vehicle Networks by Simulating Future Traffic Patterns
Abstract: Roadway congestion leads to wasted time and money and environmental damage. Since adding more roadway capacity is often not possible in urban environments, it is becoming more important to use existing road networks more efficiently. Toward this goal, recent research in real-time, schedule-driven intersection control has shown an ability to significantly reduce the delays [...]
Enabling Self-sufficient Robot Learning
Abstract: Autonomous exploration and data-efficient learning are important ingredients for helping machine learning handle the complexity and variety of real-world interactions. In this talk, I will describe methods that provide these ingredients and serve as building blocks for enabling self-sufficient robot learning. First, I will outline a family of methods that facilitate active global exploration. [...]
Adaptive Robotic Assistance through Observations of Human Behavior
Abstract: Assistive robots should take actions that support people's goals. This is especially true as robots enter into environments where personal agency is paramount, such as a person's home. Home environments have a wide variety of "optimal" solutions that depend on personal preference, making it difficult for a robot to know the goal it should [...]
Perceiving Objects and Interactions in 3D
Abstract: We observe and interact with a myriad of objects in our everyday lives, from cups and bottles to hammers and tennis rackets. In this talk, I will outline our group’s efforts towards understanding these objects and our everyday interactions with them in 3D. I will first focus on scaling 3D prediction for isolated objects across [...]
Understanding the Physical World from Images
If I show you a photo of a place you have never been to, you can easily imagine what you could do in that picture. Your understanding goes from the surfaces you see to the ones you know are there but cannot see, and can even include reasoning about how interaction would change the scene. [...]