VASC Seminar
Nataniel Ruiz
Research Scientist
Google

Unlocking Magic: Personalization of Diffusion Models for Novel Applications

3305 Newell-Simon Hall

Abstract: Since the recent advent of text-to-image diffusion models for high-quality realistic image generation, a plethora of creative applications have suddenly come within reach. I will present my work at Google, where I have attempted to unlock magical applications by proposing simple techniques that act on these large text-to-image diffusion models. Particularly, a large class of [...]

VASC Seminar
Yingsi Qin
PhD Candidate
Carnegie Mellon University

Instant Visual 3D Worlds Through Split-Lohmann Displays

3305 Newell-Simon Hall

Abstract: Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region to a different depth, instantly creating eye-tracking-free 3D worlds without using time-multiplexing. This enables real-time streaming [...]

VASC Seminar
Edward Lu
PhD Student
ECE Department at CMU

Remote Rendering and 3D Streaming for Resource-Constrained XR Devices

3305 Newell-Simon Hall

Abstract: An overview of the motivation and challenges for remote rendering and real-time 3D video streaming on XR headsets.

Bio: Edward is a third-year PhD student in the ECE department interested in computer systems for VR/AR devices. Homepage: https://users.ece.cmu.edu/~elu2/

Sponsored in part by: Meta Reality Labs Pittsburgh

VASC Seminar
Mosam Dabhi
PhD Student
Carnegie Mellon University

Vectorizing Raster Signals for Spatial Intelligence

3305 Newell-Simon Hall

Abstract: This seminar will focus on how vectorized representations can be generated from raster signals to enhance spatial intelligence. I will discuss the core methodology behind this transformation, with a focus on applications in AR/VR and robotics. The seminar will also briefly cover follow-up work that explores rigging and re-animating objects from casual single videos [...]

VASC Seminar
Bailey Miller
PhD Candidate
Carnegie Mellon University

Stochastic Graphics Primitives

3305 Newell-Simon Hall

Abstract: For decades, computer graphics has successfully leveraged stochasticity to enable both expressive volumetric representations of participating media like clouds and efficient Monte Carlo rendering of large-scale, complex scenes. In this talk, we’ll explore how these complementary forms of stochasticity (representational and algorithmic) may be applied more generally across computer graphics and vision. In [...]

VASC Seminar
Noah Snavely
Professor & Research Scientist
Cornell Tech & Google DeepMind

Reconstructing Everything

3305 Newell-Simon Hall

Abstract: The presentation will be about a long-running, perhaps quixotic effort to reconstruct all of the world's structures in 3D from Internet photos, why this is challenging, and why this effort might be useful in the era of generative AI. Bio: Noah Snavely is a Professor in the Computer Science Department at Cornell University [...]

VASC Seminar
Christian Richardt
Research Scientist Lead
Meta Reality Labs Research

High-Fidelity Neural Radiance Fields

3305 Newell-Simon Hall

Abstract: I will present three recent projects that focus on high-fidelity neural radiance fields for walkable VR spaces: VR-NeRF (SIGGRAPH Asia 2023) is an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to [...]

VASC Seminar
Saining Xie
Assistant Professor
Courant Institute of Mathematical Sciences, New York University

Building Scalable Visual Intelligence: From Representation to Understanding and Generation

3305 Newell-Simon Hall

Abstract: In this talk, we will dive into our recent work on vision-centric generative AI, focusing on how it helps with understanding and creating visual content like images and videos. We'll cover the latest advances, including multimodal large language models for visual understanding and diffusion transformers for visual generation. We'll explore how these two areas [...]

VASC Seminar
Qitao Zhao
Master's Student
Computer Vision, Carnegie Mellon University

Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis

3305 Newell-Simon Hall

Abstract: This talk will present our approach for reconstructing objects from sparse-view images captured in unconstrained environments. In the absence of ground-truth camera poses, we will demonstrate how to utilize estimates from off-the-shelf systems and address two key challenges: refining noisy camera poses in sparse views and effectively handling outlier poses. Bio: Qitao is a second-year [...]

VASC Seminar
Vimal Mollyn
PhD Student
Human Computer Interaction Institute, Carnegie Mellon University

EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras

3305 Newell-Simon Hall

Abstract: In augmented and virtual reality (AR/VR) experiences, a user’s arms and hands can provide a convenient and tactile surface for touch input. Prior work has shown on-body input to have significant speed, accuracy, and ergonomic benefits over in-air interfaces, which are common today. In this work, we demonstrate high accuracy, bare hands (i.e., no special [...]

VASC Seminar
Hyunsung Cho
PhD Student
Human-Computer Interaction Institute (HCII), Carnegie Mellon University

Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality

3305 Newell-Simon Hall

Abstract: Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed, and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate in localizing sound cues, especially with multiple sources, due to limitations in human auditory perception such as [...]

VASC Seminar
Srinath Sridhar
Assistant Professor
Computer Science, Brown University

Generative Modelling for 3D Multimodal Understanding of Human Physical Interactions

3305 Newell-Simon Hall

Abstract: Generative modelling has been extremely successful in synthesizing text, images, and videos. Can the same machinery also help us better understand how to physically interact with the multimodal 3D world? In this talk, I will introduce some of my group's work in answering this question. I will first discuss how we can enable 2D [...]