Teaching Robots to Drive: Scalable Policy Improvement via Human Feedback
Abstract: A long-standing problem in autonomous driving is grappling with the long tail of rare scenarios for which little or no data is available. Although learning-based methods scale with data, it is unclear whether simply ramping up data collection will eventually make this problem go away. Approaches which rely on simulation or world modeling offer some [...]
Exploration for Continually Improving Robots
Abstract: Data-driven learning is a powerful paradigm for enabling robots to learn skills. Current prominent approaches involve collecting large datasets of robot behavior via teleoperation or simulation, which are then used to train policies. For these policies to generalize to diverse tasks and scenes, a large burden is placed on constructing a rich initial dataset, which is [...]
Unlocking Magic: Personalization of Diffusion Models for Novel Applications
Abstract: Since the recent advent of text-to-image diffusion models for high-quality realistic image generation, a plethora of creative applications have suddenly come within reach. I will present my work at Google, where I have attempted to unlock magical applications by proposing simple techniques that act on these large text-to-image diffusion models. In particular, a large class of [...]
Domesticating Soft Robotics Research and Development with Accessible Biomaterials
Abstract: Current trends in robotics design and engineering typically focus on high-value applications where high performance, precision, and robustness take precedence over cost, accessibility, and environmental impact. In this paradigm, the capability landscape of robotics is largely shaped by access to capital and the promise of economic return. This thesis explores an alternative [...]
Understanding and acting in the 4D world
Abstract: As humans, we are constantly interacting with and observing a three-dimensional dynamic world, where objects around us change state as they move or are moved, and we ourselves move for navigation and exploration. Such an interaction between a dynamic environment and a dynamic ego-agent is complex to model, as an ego-agent's perception of the [...]
Using mechanical intelligence to create adaptable robots
Abstract: Currently deployed robots are primarily rigid machines that perform repetitive, controlled tasks in highly constrained or open environments such as factory floors, warehouses, or fields. There is an increasing demand for more adaptable, mobile, and flexible robots that can manipulate or move through unstructured and dynamic environments. My vision is to create robots that [...]
Instant Visual 3D Worlds Through Split-Lohmann Displays
Abstract: Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region at a different depth, instantly creating eye-tracking-free 3D worlds without using time-multiplexing. This enables real-time streaming [...]
Remote Rendering and 3D Streaming for Resource-Constrained XR Devices
Abstract: An overview of the motivation and challenges for remote rendering and real-time 3D video streaming on XR headsets. Bio: Edward is a third-year PhD student in the ECE department interested in computer systems for VR/AR devices. Homepage: https://users.ece.cmu.edu/~elu2/ Sponsored in part by: Meta Reality Labs Pittsburgh
Vectorizing Raster Signals for Spatial Intelligence
Abstract: This seminar will focus on how vectorized representations can be generated from raster signals to enhance spatial intelligence. I will discuss the core methodology behind this transformation, with a focus on applications in AR/VR and robotics. The seminar will also briefly cover follow-up work that explores rigging and re-animating objects from casual single videos [...]
Learning Universal Humanoid Control
Abstract: Since infancy, humans acquire motor skills, behavioral priors, and objectives by learning from their caregivers. Similarly, as we create humanoids in our own image, we aspire for them to learn from us and develop universal physical and cognitive capabilities that are comparable to, or even surpass, our own. In this thesis, we explore how [...]
Generative Robotics: Self-Supervised Learning for Human-Robot Collaborative Creation
Abstract: While Generative AI has shown breakthroughs in recent years in generating new digital content such as images or 3D models from high-level goal inputs like text, robotics technologies have not, focusing instead on low-level goal inputs. We propose Generative Robotics as a new field of robotics that combines the high-level goal input abilities of [...]
3D Video Models through Point Tracking, Reconstructing and Forecasting
Abstract: 3D scene understanding from 2D video is essential for enabling advanced applications such as autonomous driving, robotics, virtual reality, and augmented reality. These fields rely on accurate 3D spatial awareness and dynamic interaction modeling to navigate complex environments, manipulate objects, and provide immersive experiences. Unlike 2D data, 3D training data is much less abundant, which [...]
What Makes Learning to Control Easy or Hard?
Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine [...]
Towards a Robot Generalist through In-Context Learning and Abstractions
Abstract: The goal of this thesis is to discover AI processes that enhance cross-domain and cross-task generalization in intelligent robot agents. Unlike the dominant approach in contemporary robot learning, which pursues generalization primarily through scaling laws (increasing data and model size), we focus on identifying the best abstractions and representations in both perception and policy [...]
Vision-based Human Motion Modeling and Analysis
Abstract: Modern computer vision has achieved remarkable success in tasks such as detecting, segmenting, and estimating the pose of humans in images and videos, reaching or even surpassing human-level performance. However, these methods still face significant challenges in predicting and analyzing future human motion. This thesis explores how vision-based solutions can enhance the fidelity and accuracy [...]
Stochastic Graphics Primitives
Abstract: For decades, computer graphics has successfully leveraged stochasticity to enable both expressive volumetric representations of participating media like clouds and efficient Monte Carlo rendering of large-scale, complex scenes. In this talk, we’ll explore how these complementary forms of stochasticity (representational and algorithmic) may be applied more generally across computer graphics and vision. In [...]