Seminar
Can Robots Based on Musculoskeletal Designs Better Interact With the World?
Abstract: Living robots represent a new frontier in engineering materials for robotic systems, incorporating biological living cells and synthetic materials into their design. These bio-hybrid robots are dynamic and intelligent, potentially harnessing living matter’s capabilities, such as growth, regeneration, morphing, biodegradation, and environmental adaptation. Such attributes position bio-hybrid devices as a transformative force in robotics [...]
Soft Wearable Haptic Devices for Ubiquitous Communication
Abstract: Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large [...]
Reconstructing Everything
Abstract: The presentation will be about a long-running, perhaps quixotic effort to reconstruct all of the world's structures in 3D from Internet photos, why this is challenging, and why this effort might be useful in the era of generative AI. Bio: Noah Snavely is a Professor in the Computer Science Department at Cornell University [...]
Using Robotics, Imaging, and AI to Tackle Apple Fruit Production: Crop Harvest and Fire Blight Disease, the Two Major Bottlenecks for U.S. Apple Producers
Abstract: Temperate tree fruit production is a significant agricultural sector in the United States, encompassing a variety of fruits such as apples, pears, cherries, peaches, and plums. The U.S. is the second-largest producer of apples in the world, after China, with annual production of 10 to 11 billion pounds. However, apple production is complicated [...]
Building Generalist Robots with Agility via Learning and Control: Humanoids and Beyond
Abstract: Recent breathtaking advances in AI and robotics have brought us closer to building general-purpose robots in the real world, e.g., humanoids capable of performing a wide range of human tasks in complex environments. Two key challenges in realizing such general-purpose robots are: (1) achieving "breadth" in task/environment diversity, i.e., the generalist aspect, and (2) [...]
High-Fidelity Neural Radiance Fields
Abstract: I will present three recent projects that focus on high-fidelity neural radiance fields for walkable VR spaces: VR-NeRF (SIGGRAPH Asia 2023) is an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to [...]
Building Scalable Visual Intelligence: From Representation to Understanding and Generation
Abstract: In this talk, we will dive into our recent work on vision-centric generative AI, focusing on how it helps with understanding and creating visual content like images and videos. We'll cover the latest advances, including multimodal large language models for visual understanding and diffusion transformers for visual generation. We'll explore how these two areas [...]
Robots That Know When They Don’t Know
Abstract: Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that [...]
Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis
Abstract: This talk will present our approach for reconstructing objects from sparse-view images captured in unconstrained environments. In the absence of ground-truth camera poses, we will demonstrate how to utilize estimates from off-the-shelf systems and address two key challenges: refining noisy camera poses in sparse views and effectively handling outlier poses. Bio: Qitao is a second-year [...]
EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras
Abstract: In augmented and virtual reality (AR/VR) experiences, a user’s arms and hands can provide a convenient and tactile surface for touch input. Prior work has shown on-body input to have significant speed, accuracy, and ergonomic benefits over in-air interfaces, which are common today. In this work, we demonstrate high accuracy, bare hands (i.e., no special [...]
Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality
Abstract: Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed, and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate in localizing sound cues, especially with multiple sources due to limitations in human auditory perception such as [...]
Abstraction Barriers for Embodied Algorithms
Abstract: Designing robotic systems to reliably modify their environment typically requires expert engineers and several design iterations. This talk will cover abstraction barriers that can be used to make the process of building such systems easier and the results more predictable. By focusing on approximate mathematical representations that model the process dynamics, these representations can [...]
Autonomous Robotic Surgery: Science Fiction or Reality?
Abstract: Robotic assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon operating every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of [...]
Generative Modelling for 3D Multimodal Understanding of Human Physical Interactions
Abstract: Generative modelling has been extremely successful in synthesizing text, images, and videos. Can the same machinery also help us better understand how to physically interact with the multimodal 3D world? In this talk, I will introduce some of my group's work in answering this question. I will first discuss how we can enable 2D [...]
A Retrospective: 40 Years of Field Robotics
Abstract: Chuck has been building and deploying robots in the field for the past 40 years. In this retrospective, he will touch on the robots, people, and experiences that have been part of the journey, from the early days in the 1980s with the Three Mile Island nuclear robots and the first outdoor autonomy robots [...]