Estimating Object Importance and Modeling Driver’s Situational Awareness for Intelligent Driving
Abstract: The ability to identify important objects in a complex and dynamic driving environment can help assistive driving systems alert drivers. These assistance systems also require a model of the driver's situational awareness (SA), i.e., which aspects of the scene the driver is already aware of, to avoid unnecessary alerts. This thesis builds towards such intelligent driving [...]
AI for Human Mobility
Abstract: This talk will describe a series of AI and robotics projects aimed at helping people independently move through cities and buildings. Projects include a deployed personalized transit information app, guide robots for people who are blind, and an integrated AI system that assists blind users with guidance and exploration. Specific findings will be presented [...]
Learning for Perception and Strategy: Adaptive Omnidirectional Stereo Vision and Tactical Reinforcement Learning
Abstract: Multi-view stereo omnidirectional distance estimation usually needs to build a cost volume over many hypothetical distance candidates. Building this cost volume is often computationally heavy given the limited resources available on a mobile robot. We propose a new geometry-informed distance-candidate selection method which enables the use of a very small number [...]
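For readers unfamiliar with the cost-volume formulation mentioned above, the following minimal Python sketch illustrates the standard idea: hypothesize a set of candidate distances, compare reference features against source features warped to each candidate, and take the per-pixel argmin. The warp here is a placeholder (a real omnidirectional system needs the camera model), and all names, shapes, and candidate values are illustrative assumptions rather than the proposed method.

    import numpy as np

    def warp_to_distance(src_feat, d):
        # Placeholder for a geometry-aware warp (e.g., a spherical sweep) at distance d.
        return src_feat

    def build_cost_volume(ref_feat, src_feat, candidates):
        # ref_feat, src_feat: (C, H, W); returns a cost volume of shape (D, H, W).
        costs = []
        for d in candidates:
            warped = warp_to_distance(src_feat, d)
            costs.append(np.abs(ref_feat - warped).mean(axis=0))  # L1 matching cost
        return np.stack(costs, axis=0)

    C, H, W = 16, 8, 8
    ref = np.random.rand(C, H, W)
    src = np.random.rand(C, H, W)
    candidates = 1.0 / np.linspace(1.0 / 0.5, 1.0 / 20.0, num=8)  # a few inverse-distance samples
    volume = build_cost_volume(ref, src, candidates)
    distance_map = candidates[np.argmin(volume, axis=0)]          # per-pixel distance estimate

The cost of this construction scales linearly with the number of candidates, which is why reducing the candidate count, as the abstract proposes, directly reduces computation.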
Online-Adaptive Self-Supervised Learning with Visual Foundation Models for Autonomous Off-Road Driving
Abstract: Autonomous robot navigation in off-road environments currently presents a number of challenges. The lack of structure makes it difficult to handcraft geometry-based heuristics that are robust to the diverse set of scenarios the robot might encounter. Many of the learned methods that work well in urban scenarios require massive amounts of hand-labeled data, but [...]
Multimodal Representations for Adaptable Robot Policies in Human-Inhabited Spaces
Abstract: Human beings sense and express themselves through multiple modalities. To capture the multimodal ways in which humans communicate, I want to build adaptable robot policies that infer task pragmatics from video and language prompts, reason about sounds and other sensors, take actions, and learn mannerisms of interacting with people and objects. Existing solutions for robot policies [...]
Interleaving Discrete Search and Continuous Optimization for Kinodynamic Motion Planning
Abstract: Motion planning for dynamically complex robotic tasks requires explicit reasoning about constraints on velocity, acceleration, and force/torque, as well as kinematic constraints such as obstacle avoidance. To meet these constraints, planning algorithms must simultaneously make high-level discrete decisions and low-level continuous decisions. For example, pushing a heavy object involves making discrete decisions about contact locations and continuous decisions [...]
Goal-Expressive Movement for Social Navigation: Where and When to Behave Legibly
Abstract: Robots often need to communicate their navigation goals to assist observers in anticipating the robot's future actions. Enabling observers to infer where a robot is going from its movements is particularly important as robots begin to share workplaces, sidewalks, and social spaces with humans. We can use legible motion, or movements that use intentional [...]
Eye Gaze for Intelligent Driving
Abstract: Intelligent vehicles have been proposed as one path to increasing traffic safety and reducing on-road crashes. Driving “intelligence” today takes many forms, ranging from simple blind spot occupancy or forward collision warnings to distance-aware cruise control and all the way to full driving autonomy in certain situations. Primarily, these methods are outward-facing and operate on [...]
AI-CARING
AI-CARING is an NSF-sponsored institute, led by Georgia Tech, whose mission is to investigate, develop, and evaluate AI technologies to help older adults live independently. The Institute focuses on providing reminders to older adults and alerting caregivers when necessary, assisting older adults with tasks such as meal preparation, motivating them to exercise, providing conversational [...]
Learning to Perceive and Predict Everyday Interactions
Abstract: This thesis aims to build computer systems to understand everyday hand-object interactions in the physical world – both perceiving ongoing interactions in 3D space and predicting possible interactions. This ability is crucial for applications such as virtual reality, robotic manipulation, and augmented reality. The problem is inherently ill-posed due to the challenges of one-to-many [...]
Sensorized Soft Material Systems with Integrated Electronics and Computing
Abstract: The integration of soft and multifunctional materials into emerging technologies is becoming more widespread due to their ability to enhance functionality in ways not possible with typical rigid alternatives. This trend is evident in various fields. For example, wearable technologies are increasingly designed using soft materials to improve modulus compatibility with biological [...]
Deep Learning for Tactile Sensing: Development to Deployment
Abstract: The importance of sensing is widely acknowledged for robots interacting with the physical environment. However, few contemporary sensors have gained widespread use among roboticists. This thesis proposes a framework for incorporating sensors into a robot learning paradigm, from development to deployment, through the lens of ReSkin -- a versatile and scalable magnetic tactile sensor. [...]
Learning and Translating Temporal Abstractions of Behaviour across Humans and Robots
Abstract: Humans are remarkably adept at learning to perform tasks by imitating other people demonstrating these tasks. Key to this is our ability to reason abstractly about the high-level strategy of the task at hand (such as the recipe for cooking a dish) and the behaviours needed to solve this task (such as the behaviour [...]
Towards Underwater 3D Visual Perception
Abstract: With modern robotic technologies, seafloor imagery has become more accessible to both researchers and the public. This thesis leverages deep learning and 3D vision techniques to deliver valuable information from seafloor image observations. Despite the widespread use of deep learning and 3D vision algorithms across various fields, underwater imaging presents unique challenges, such as [...]
Assistive value alignment using in-situ naturalistic human behaviors
Abstract: As collaborative robots are increasingly deployed in personal environments, such as the home, it is critical that they complete tasks in ways consistent with personal preferences. Determining personal preferences for completing household chores, however, is challenging. Many household chores, such as setting a table or loading a dishwasher, are sequential and open-vocabulary, creating a [...]
Ice Cream Social
Join RISO at the Ice Cream Social in the Robolounge, 5-7 on Wednesday, September 4th. Free entry.
Sampling and Signal-Processing for High-Dimensional Visual Appearance in Computer Graphics and Vision
Abstract: Many problems in computer graphics and vision, such as acquiring images of a scene to enable synthesis of novel views from many directions for virtual reality, computing realistic images by integrating lighting from many different incident directions across a range of scene pixels and viewing angles, or acquiring and modeling the appearance of realistic materials [...]
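As a point of reference for the "integrating lighting from many different incident directions" problem mentioned above, here is a minimal Monte Carlo estimator of outgoing radiance under a Lambertian BRDF and a constant environment light; both choices, and all names, are illustrative assumptions used only to make the integral concrete, not the talk's method.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_hemisphere(n):
        # Uniform sampling of directions on the upper hemisphere (pdf = 1 / (2*pi)).
        u1, u2 = rng.random(n), rng.random(n)
        z = u1
        r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
        phi = 2.0 * np.pi * u2
        return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    albedo = 0.7
    brdf = albedo / np.pi                      # Lambertian BRDF value
    incident_radiance = 1.0                    # toy constant environment light
    dirs = sample_hemisphere(4096)
    cos_theta = dirs[:, 2]                     # cosine of angle to the surface normal (+z)
    pdf = 1.0 / (2.0 * np.pi)
    L_out = np.mean(brdf * incident_radiance * cos_theta / pdf)
    # Analytic answer for this toy setup is albedo * incident_radiance = 0.7.

With more realistic lights, materials, and viewpoints the same estimator applies, but its variance grows quickly, which is where careful sampling and signal-processing strategies become important.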
Teaching Robots to Drive: Scalable Policy Improvement via Human Feedback
Abstract: A long-standing problem in autonomous driving is grappling with the long-tail of rare scenarios for which little or no data is available. Although learning-based methods scale with data, it is unclear that simply ramping up data collection will eventually make this problem go away. Approaches which rely on simulation or world modeling offer some [...]
Exploration for Continually Improving Robots
Abstract: Data-driven learning is a powerful paradigm for enabling robots to learn skills. Current prominent approaches involve collecting large datasets of robot behavior via teleoperation or simulation and then training policies on them. For these policies to generalize to diverse tasks and scenes, a large burden is placed on constructing a rich initial dataset, which is [...]
Unlocking Magic: Personalization of Diffusion Models for Novel Applications
Abstract: Since the recent advent of text-to-image diffusion models for high-quality realistic image generation, a plethora of creative applications have suddenly come within reach. I will present my work at Google, where I have attempted to unlock magical applications by proposing simple techniques that act on these large text-to-image diffusion models. In particular, a large class of [...]
Domesticating Soft Robotics Research and Development with Accessible Biomaterials
Abstract: Current trends in robotics design and engineering typically focus on high-value applications where high performance, precision, and robustness take precedence over cost, accessibility, and environmental impact. In this paradigm, the capability landscape of robotics is largely shaped by access to capital and the promise of economic return. This thesis explores an alternative [...]
Understanding and acting in the 4D world
Abstract: As humans, we are constantly interacting with and observing a three-dimensional dynamic world, where objects around us change state as they move or are moved, and we ourselves move for navigation and exploration. Such an interaction between a dynamic environment and a dynamic ego-agent is complex to model, as an ego-agent's perception of the [...]
Using mechanical intelligence to create adaptable robots
Abstract: Currently deployed robots are primarily rigid machines that perform repetitive, controlled tasks in highly constrained or open environments such as factory floors, warehouses, or fields. There is an increasing demand for more adaptable, mobile, and flexible robots that can manipulate or move through unstructured and dynamic environments. My vision is to create robots that [...]
Instant Visual 3D Worlds Through Split-Lohmann Displays
Abstract: Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region at a different depth, instantly creating eye-tracking-free 3D worlds without using time-multiplexing. This enables real-time streaming [...]
Remote Rendering and 3D Streaming for Resource-Constrained XR Devices
Abstract: An overview of the motivation and challenges for remote rendering and real-time 3D video streaming on XR headsets.
Bio: Edward is a third-year PhD student in the ECE department interested in computer systems for VR/AR devices.
Homepage: https://users.ece.cmu.edu/~elu2/
Sponsored in part by: Meta Reality Labs Pittsburgh
Vectorizing Raster Signals for Spatial Intelligence
Abstract: This seminar will focus on how vectorized representations can be generated from raster signals to enhance spatial intelligence. I will discuss the core methodology behind this transformation, with a focus on applications in AR/VR and robotics. The seminar will also briefly cover follow-up work that explores rigging and re-animating objects from casual single videos [...]
Learning Universal Humanoid Control
Abstract: Since infancy, humans acquire motor skills, behavioral priors, and objectives by learning from their caregivers. Similarly, as we create humanoids in our own image, we aspire for them to learn from us and develop universal physical and cognitive capabilities that are comparable to, or even surpass, our own. In this thesis, we explore how [...]
Generative Robotics: Self-Supervised Learning for Human-Robot Collaborative Creation
Abstract: While Generative AI has shown breakthroughs in recent years in generating new digital content such as images or 3D models from high-level goal inputs like text, robotics technologies have not, focusing instead on low-level goal inputs. We propose Generative Robotics as a new field of robotics which combines the high-level goal input abilities of [...]
3D Video Models through Point Tracking, Reconstructing and Forecasting
Abstract: 3D scene understanding from 2D video is essential for enabling advanced applications such as autonomous driving, robotics, virtual reality, and augmented reality. These fields rely on accurate 3D spatial awareness and dynamic interaction modeling to navigate complex environments, manipulate objects, and provide immersive experiences. Unlike 2D data, 3D training data is much less abundant, which [...]
What Makes Learning to Control Easy or Hard?
Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine [...]
Towards a Robot Generalist through In-Context Learning and Abstractions
Abstract: The goal of this thesis is to discover AI processes that enhance cross-domain and cross-task generalization in intelligent robot agents. Unlike the dominant approach in contemporary robot learning, which pursues generalization primarily through scaling laws (increasing data and model size), we focus on identifying the best abstractions and representations in both perception and policy [...]
Vision-based Human Motion Modeling and Analysis
Abstract: Modern computer vision has achieved remarkable success in tasks such as detecting, segmenting, and estimating the pose of humans in images and videos, reaching or even surpassing human-level performance. However, these methods still face significant challenges in predicting and analyzing future human motion. This thesis explores how vision-based solutions can enhance the fidelity and accuracy [...]
Stochastic Graphics Primitives
Abstract: For decades, computer graphics has successfully leveraged stochasticity to enable both expressive volumetric representations of participating media like clouds and efficient Monte Carlo rendering of large-scale, complex scenes. In this talk, we’ll explore how these complementary forms of stochasticity (representational and algorithmic) may be applied more generally across computer graphics and vision. In [...]
Recent Progress in Graph-Search Methods for Multi-Robot-Arm Motion Planning
Abstract: An exciting frontier in robotic manipulation is the use of multiple arms at once. However, planning concurrent motions is challenging with current methods. A major obstacle is the high-dimensional state space of this planning problem, which renders many traditional motion planning algorithms impractical. This opens the door for alternatives to the common [...]
Physical Process-Informed Mapping for Robotic Exploration
Abstract: Mobile robots used for information-gathering tasks rely on dense, predictive mapping of large-scale regions to determine where to take measurements. Current approaches to mapping commonly rely on Gaussian process regression to spatially correlate data, extrapolate from sparse samples, and estimate uncertainty. However, these approaches do not incorporate meaningful information about physical processes that [...]
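As a baseline for the Gaussian-process mapping the abstract describes, the sketch below fits a GP to sparse measurements and predicts a dense map with uncertainty that could guide where to measure next. It uses scikit-learn; the toy scalar field, kernel, and query grid are assumptions for illustration only, not the thesis setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X_obs = rng.uniform(0.0, 10.0, size=(25, 2))                # sparse measurement locations
    y_obs = np.sin(X_obs[:, 0]) + 0.1 * rng.normal(size=25)     # toy scalar field samples

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-2))
    gp.fit(X_obs, y_obs)

    xs = np.linspace(0.0, 10.0, 50)
    grid = np.array([[x, y] for x in xs for y in xs])           # dense query grid
    mean, std = gp.predict(grid, return_std=True)               # predictive map and uncertainty
    next_point = grid[np.argmax(std)]                           # e.g., measure where uncertainty is largest

A purely spatial kernel like the RBF above is exactly the kind of prior the abstract argues is insufficient: it correlates nearby points but encodes nothing about the physical process generating the field.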
RI Faculty Business Meeting
Meeting for RI Faculty. Agenda was sent via a calendar invite.
Can Robots Based on Musculoskeletal Designs Better Interact With the World?
Abstract: Living robots represent a new frontier in engineering materials for robotic systems, incorporating living biological cells and synthetic materials into their design. These bio-hybrid robots are dynamic and intelligent, potentially harnessing living matter’s capabilities, such as growth, regeneration, morphing, biodegradation, and environmental adaptation. Such attributes position bio-hybrid devices as a transformative force in robotics [...]