Seminar
Towards Energy-Efficient Techniques and Applications for Universal AI Implementation
Abstract: The rapid advancement of large-scale language and vision models has significantly propelled the AI domain. We now see AI enriching everyday life in numerous ways – from community and shared virtual reality experiences to autonomous vehicles, healthcare innovations, and accessibility technologies, among others. Central to these developments is the real-time implementation of high-quality deep [...]
Structure-from-Motion Meets Self-supervised Learning
Abstract: How can we teach machines to perceive the 3D world from unlabeled videos? We will present a new solution that incorporates Structure-from-Motion (SfM) into self-supervised model learning. Given RGB inputs, deep models learn to regress depth and correspondence. Using these two predictions, we introduce a camera localization algorithm that searches for certified globally optimal poses. However, the [...]
Toward Human-Centered XR: Bridging Cognition and Computation
Abstract: Virtual and Augmented Reality enable unprecedented possibilities for displaying virtual content, sensing physical surroundings, and tracking human behaviors with high fidelity. However, we still haven't created "superhumans" who can outperform what we are capable of in physical reality, nor a "perfect" XR system that delivers infinite battery life or realistic sensation. In this talk, I will discuss some of our [...]
Carnegie Mellon Graphics Colloquium: C. Karen Liu : Building Large Models for Human Motion
Abstract: Large generative models for human motion, analogous to ChatGPT for text, will enable human motion synthesis and prediction for a wide range of applications such as character animation, humanoid robots, AR/VR motion tracking, and healthcare. This model would generate diverse, realistic human motions and behaviors, including kinematics and dynamics, [...]
Teaching a Robot to Perform Surgery: From 3D Image Understanding to Deformable Manipulation
Abstract: Robot manipulation of rigid household objects and environments has made massive strides in the past few years, thanks to achievements in the computer vision and reinforcement learning communities. One area that has progressed at a slower pace is manipulating deformable objects. For example, surgical robots are used today via teleoperation from a [...]
Zeros for Data Science
Abstract: The world around us is neither totally regular nor completely random. Our and robots' reliance on spatiotemporal patterns in daily life cannot be overstated, given that most of us can function (perceive, recognize, navigate) effectively in chaotic and previously unseen physical, social, and digital worlds. Data science has been promoted and practiced [...]
Emotion perception: progress, challenges, and use cases
Abstract: One of the challenges Human-Centric AI systems face is understanding human behavior and emotions considering the context in which they take place. For example, current computer vision approaches for recognizing human emotions usually focus on facial movements and often ignore the context in which the facial movements take place. In this presentation, I will [...]
Foundation Models for Robotic Manipulation: Opportunities and Challenges
Abstract: Foundation models, such as GPT-4 Vision, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and [...]
Learning with Less
Abstract: The performance of an AI system is nearly always tied to the amount of data at your disposal. Self-supervised machine learning can help by mitigating tedious human supervision, but the need for massive training datasets in modern AI seems insatiable. Sometimes the problem is not the amount of data, but the mismatch of [...]
Why We Should Build Robot Apprentices And Why We Shouldn’t Do It Alone
Abstract: For robots to truly integrate into human-populated, dynamic, and unpredictable environments, they will need strong adaptive capabilities. In this talk, I argue that these adaptive capabilities should leverage interaction with end users, who know how (they want) a robot to act in that environment. I will present an overview of [...]
Toward an ImageNet Moment for Synthetic Data
Abstract: Data, especially large-scale labeled data, has been a critical driver of progress in computer vision. However, many important tasks remain starved of high-quality data. Synthetic data from computer graphics is a promising solution to this challenge, but still remains in limited use. This talk will present our work on Infinigen, a procedural synthetic data [...]
Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World
Abstract: We show that imitating shortest-path planners in simulation produces Stretch RE-1 robotic agents that, given language instructions, can proficiently navigate, explore, and manipulate objects both in simulation and in the real world using only RGB sensors (no depth maps or GPS coordinates). This surprising result is enabled by our end-to-end, transformer-based SPOC architecture, powerful [...]
Teruko Yata Memorial Lecture
Human-Centric Robots and How Learning Enables Generality
Abstract: Humans have dreamt of robot helpers forever. What's new is that this dream is becoming real. New developments in AI, building on foundations of hardware and passive dynamics, enable vastly improved generality. Robots can step out of highly structured environments and become more human-centric: operating in human [...]
Creating robust deep learning models involves effectively managing nuisance variables
Abstract: Over the past decade, we have witnessed significant advances in the capabilities of deep neural network models in vision and machine learning. However, issues related to bias, discrimination, and fairness in general have received a great deal of negative attention (e.g., mistakes in surveillance and vision models confusing animals with humans). But bias in AI models [...]
Reduced-Gravity Flights and Field Testing for Lunar and Planetary Rovers
Abstract: As humanity returns to the Moon and develops outposts and related infrastructure, we need to understand how robots and work machines will behave in this harsh environment. It is challenging to find representative testing environments on Earth for Lunar and planetary rovers. To investigate the effects of reduced gravity on interactions with granular terrains, [...]
Shedding Light on 3D Cameras
Abstract: The advent (and commoditization) of low-cost 3D cameras is revolutionizing many application domains, including robotics, autonomous navigation, human computer interfaces, and recently even consumer devices such as cell-phones. Most modern 3D cameras (e.g., LiDAR) are active; they consist of a light source that emits coded light into the scene, i.e., its intensity is modulated over [...]
Where’s RobotGPT?
Abstract: The last few years have seen astonishing progress in the capabilities of generative AI techniques, particularly in the areas of language and visual understanding and generation. Key to the success of these models is the use of image and text datasets of unprecedented scale, along with models that are able to digest such large [...]
Neural Field Representations of Mobile Computational Photography
Abstract: Burst imaging pipelines allow cellphones to compensate for less-than-ideal optical and sensor hardware by computationally merging multiple lower-quality images into a single high-quality output. The main challenge for these pipelines is compensating for pixel motion, estimating how to align and merge measurements across time while the user's natural hand tremor involuntarily shakes the camera. In [...]
Robot Learning by Understanding Egocentric Videos
Abstract: The true gains of machine learning in AI subfields such as computer vision and natural language processing have come from the use of large-scale, diverse datasets for learning. In this talk, I will discuss how we can leverage large-scale diverse data in the form of egocentric videos (first-person videos of humans conducting different tasks) [...]
Special Seminar
Speaker: Abhisesh Silwal
Title: Robotics and AI for Sustainable Agriculture
Abstract: Production agriculture plays a critical role in our lives, providing food security and enabling sustainability. Despite its immense importance, it currently faces many challenges, including a shortage of farmworkers, rising production costs, and excessive herbicide use, to name a few. Robotics and artificial intelligence-based [...]
Passive Ultra-Wideband Single-Photon Imaging
Abstract: High-speed light sources, fast cameras, and depth sensors have made it possible to image dynamic phenomena occurring in ever smaller time intervals, with the help of actively controlled light sources and synchronization. Unfortunately, while these techniques do capture ultrafast events, they cannot simultaneously capture slower ones. I will discuss our recent work on passive ultra-wideband [...]
Simulation-Driven Soft Robotics
Abstract: Soft-bodied robots present a compelling solution for navigating tight spaces and interacting with unknown obstacles, with potential applications in inspection, medicine, and AR/VR. Yet, even after a decade, soft robots remain largely in the prototype phase without scaling to the tasks where they show the most promise. These systems are difficult to design and [...]
From Understanding to Interacting with the 3D World
Abstract: Understanding the 3D structure of real-world environments is a fundamental challenge in machine perception, critical for applications spanning robotic navigation, content creation, and mixed reality scenarios. In recent years, machine learning has undergone rapid advancements; however, in the 3D domain, such data-driven learning is often very challenging due to limited 3D/4D data availability. In this talk, [...]
Learned Imaging Systems
Abstract: Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Of particular interest in recent years has been the development of end-to-end learned “Deep Optics” systems that use differentiable optical simulation in combination with backpropagation to simultaneously learn optical design and deep network post-processing for applications such as hyperspectral [...]
ARPA-H and America’s Health: Pursuing High-Risk/High-Reward Research to Improve Health Outcomes for All
Dr. Andy Kilianski will provide an overview of ARPA-H, a new U.S. government funding agency pursuing R&D for health challenges. He will review the unique niche occupied by ARPA-H within the Department of Health and Human Services and how ARPA-H is already partnering with academia and industry to transform health outcomes across the country. Discussion [...]
Robots Crossing Boundaries
Abstract: Over the last 50 years, autonomous robots have made the leap from being novel research contributions in university labs to becoming the fundamental technology upon which companies are built. While they traditionally have belonged to the engineering and computer science disciplines, robots have now crossed into other areas of study and research - making impacts in oceanography, geology, archaeology, biomechanics and biology. [...]
Sampling and Signal-Processing for High-Dimensional Visual Appearance in Computer Graphics and Vision
Abstract: Many problems in computer graphics and vision, such as acquiring images of a scene to enable synthesis of novel views from many directions for virtual reality, computing realistic images by integrating lighting from many different incident directions across a range of scene pixels and viewing angles, or acquiring and modeling the appearance of realistic materials [...]
Unlocking Magic: Personalization of Diffusion Models for Novel Applications
Abstract: Since the recent advent of text-to-image diffusion models for high-quality realistic image generation, a plethora of creative applications have suddenly become within reach. I will present my work at Google where I have attempted to unlock magical applications by proposing simple techniques that act on these large text-to-image diffusion models. Particularly, a large class of [...]
Instant Visual 3D Worlds Through Split-Lohmann Displays
Abstract: Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region to a different depth, instantly creating eye-tracking-free 3D worlds without using time-multiplexing. This enables real-time streaming [...]
What Makes Learning to Control Easy or Hard?
Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine [...]
Soft Wearable Haptic Devices for Ubiquitous Communication
Abstract: Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large [...]
Robots That Know When They Don’t Know
Abstract: Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that [...]