VASC Seminar
Saining Xie
Assistant Professor
Courant Institute of Mathematical Sciences, New York University

Building Scalable Visual Intelligence: From Representation to Understanding and Generation

3305 Newell-Simon Hall

Abstract: In this talk, we will dive into our recent work on vision-centric generative AI, focusing on how it helps with understanding and creating visual content like images and videos. We'll cover the latest advances, including multimodal large language models for visual understanding and diffusion transformers for visual generation. We'll explore how these two areas [...]

RI Seminar
Anirudha Majumdar
Associate Professor
Mechanical and Aerospace Engineering, Princeton University

Robots That Know When They Don’t Know

1403 Tepper School Building

Abstract: Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that [...]

VASC Seminar
Qitao Zhao
Master's Student
Computer Vision, Carnegie Mellon University

Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis

3305 Newell-Simon Hall

Abstract: This talk will present our approach for reconstructing objects from sparse-view images captured in unconstrained environments. In the absence of ground-truth camera poses, we will demonstrate how to utilize estimates from off-the-shelf systems and address two key challenges: refining noisy camera poses in sparse views and effectively handling outlier poses. Bio: Qitao is a second-year [...]

VASC Seminar
Vimal Mollyn
PhD Student
Human Computer Interaction Institute, Carnegie Mellon University

EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras

3305 Newell-Simon Hall

Abstract: In augmented and virtual reality (AR/VR) experiences, a user’s arms and hands can provide a convenient and tactile surface for touch input. Prior work has shown on-body input to have significant speed, accuracy, and ergonomic benefits over in-air interfaces, which are common today. In this work, we demonstrate high accuracy, bare hands (i.e., no special [...]

VASC Seminar
Hyunsung Cho
Ph.D. Student
Human-Computer Interaction Institute (HCII), Carnegie Mellon University

Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality

3305 Newell-Simon Hall

Abstract: Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed, and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate in localizing sound cues, especially with multiple sources due to limitations in human auditory perception such as [...]

RI Seminar
Nils Napp
Assistant Professor
Electrical and Computer Engineering, Cornell University

Abstraction Barriers for Embodied Algorithms

1403 Tepper School Building

Abstract: Designing robotic systems to reliably modify their environment typically requires expert engineers and several design iterations. This talk will cover abstraction barriers that can be used to make the process of building such systems easier and the results more predictable. By focusing on approximate mathematical representations that model the process dynamics, these representations can [...]

RI Seminar
Axel Krieger
Associate Professor
Department of Mechanical Engineering, Johns Hopkins Whiting School of Engineering

Autonomous Robotic Surgery: Science Fiction or Reality?

1403 Tepper School Building

Abstract: Robotic-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon to control every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of [...]

VASC Seminar
Srinath Sridhar
Assistant Professor
Computer Science, Brown University

Generative Modelling for 3D Multimodal Understanding of Human Physical Interactions

3305 Newell-Simon Hall

Abstract: Generative modelling has been extremely successful in synthesizing text, images, and videos. Can the same machinery also help us better understand how to physically interact with the multimodal 3D world? In this talk, I will introduce some of my group's work in answering this question. I will first discuss how we can enable 2D [...]

Field Robotics Center Seminar
Senior Field Robotics Specialist
Robotics Institute, Carnegie Mellon University

A Retrospective: 40 Years of Field Robotics

CIC Building Conference Room 1, LL Level

Abstract: Chuck has been building and deploying robots in the field for the past 40 years. In this retrospective, he will touch on the robots, people, and experiences that have been part of the journey. From the early days in the 1980s with the Three Mile Island nuclear robots and the first outdoor autonomy robots [...]

RI Seminar
Assistant Professor
Robotics Institute, Carnegie Mellon University

Learning for Dynamic Robot Manipulation of Deformable and Transparent Objects

1403 Tepper School Building

Abstract: Dynamics, softness, deformability, and difficult-to-detect objects will be critical for new domains in robotic manipulation. But there are complications, including unmodelled dynamic effects, infinite-dimensional state spaces of deformable objects, and missing features from perception. This talk explores learning methods based on multi-view sensing, acoustics, physics-based regularizations, and Koopman operators, and proposes a novel multi-finger soft [...]