RI Seminar
Allison Okamura
Richard W. Weiland Professor of Engineering
Department of Mechanical Engineering, Stanford University

Soft Wearable Haptic Devices for Ubiquitous Communication

1403 Tepper School Building

Abstract: Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large [...]

VASC Seminar
Noah Snavely
Professor & Research Scientist
Cornell Tech & Google DeepMind

Reconstructing Everything

3305 Newell-Simon Hall

Abstract: This talk will describe a long-running, perhaps quixotic effort to reconstruct all of the world's structures in 3D from Internet photos, why this is challenging, and why this effort might be useful in the era of generative AI. Bio: Noah Snavely is a Professor in the Computer Science Department at Cornell University [...]

Field Robotics Center Seminar
Srdjan Acimovic
Assistant Professor
School of Plant and Environmental Sciences, Virginia Tech

Using Robotics, Imaging and AI to Tackle Apple Fruit Production: Crop Harvest and Fire Blight Disease, The Two Major Bottlenecks for U.S. Apple Producers

CIC Building Conference Room 1, LL Level

Abstract: Temperate tree fruit production is a significant agricultural sector in the United States, encompassing a variety of fruits like apples, pears, cherries, peaches, and plums. The U.S. is the second-largest producer of apples in the world, after China. Annual U.S. production is 10 to 11 billion pounds of apples. However, apple production is complicated [...]

RI Seminar
Assistant Professor
Robotics Institute, Carnegie Mellon University

Building Generalist Robots with Agility via Learning and Control: Humanoids and Beyond

1403 Tepper School Building

Abstract: Recent breathtaking advances in AI and robotics have brought us closer to building general-purpose robots in the real world, e.g., humanoids capable of performing a wide range of human tasks in complex environments. Two key challenges in realizing such general-purpose robots are: (1) achieving "breadth" in task/environment diversity, i.e., the generalist aspect, and (2) [...]

VASC Seminar
Christian Richardt
Research Scientist Lead
Meta Reality Labs Research

High-Fidelity Neural Radiance Fields

3305 Newell-Simon Hall

Abstract: I will present three recent projects that focus on high-fidelity neural radiance fields for walkable VR spaces: VR-NeRF (SIGGRAPH Asia 2023) is an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to [...]

VASC Seminar
Saining Xie
Assistant Professor
Courant Institute of Mathematical Sciences, New York University

Building Scalable Visual Intelligence: From Representation to Understanding and Generation

3305 Newell-Simon Hall

Abstract: In this talk, we will dive into our recent work on vision-centric generative AI, focusing on how it helps with understanding and creating visual content like images and videos. We'll cover the latest advances, including multimodal large language models for visual understanding and diffusion transformers for visual generation. We'll explore how these two areas [...]

RI Seminar
Anirudha Majumdar
Associate Professor
Mechanical and Aerospace Engineering, Princeton University

Robots That Know When They Don’t Know

1403 Tepper School Building

Abstract: Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that [...]

VASC Seminar
Qitao Zhao
Master's Student
Computer Vision, Carnegie Mellon University

Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis

3305 Newell-Simon Hall

Abstract: This talk will present our approach for reconstructing objects from sparse-view images captured in unconstrained environments. In the absence of ground-truth camera poses, we will demonstrate how to utilize estimates from off-the-shelf systems and address two key challenges: refining noisy camera poses in sparse views and effectively handling outlier poses. Bio: Qitao is a second-year [...]

VASC Seminar
Vimal Mollyn
PhD Student
Human Computer Interaction Institute, Carnegie Mellon University

EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras

3305 Newell-Simon Hall

Abstract:  In augmented and virtual reality (AR/VR) experiences, a user’s arms and hands can provide a convenient and tactile surface for touch input. Prior work has shown on-body input to have significant speed, accuracy, and ergonomic benefits over in-air interfaces, which are common today. In this work, we demonstrate high accuracy, bare hands (i.e., no special [...]

VASC Seminar
Hyunsung Cho
Ph.D. Student
Human-Computer Interaction Institute (HCII), Carnegie Mellon University

Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality

3305 Newell-Simon Hall

Abstract:  Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed, and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate in localizing sound cues, especially with multiple sources due to limitations in human auditory perception such as [...]