VASC Seminar
Shengjie Zhu
Ph.D. Student
Michigan State University

Structure-from-Motion Meets Self-supervised Learning

Newell-Simon Hall 3305

Abstract: How can we teach machines to perceive the 3D world from unlabeled videos? We will present a new solution that incorporates Structure-from-Motion (SfM) into self-supervised model learning. Given RGB inputs, deep models learn to regress depth and correspondence. Taking these two predictions as input, we introduce a camera localization algorithm that searches for certifiably globally optimal poses. However, the [...]
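
As background for the abstract, here is a minimal sketch of the standard self-supervised view-synthesis objective used to train depth networks from unlabeled video; it is a generic illustration in PyTorch, not the speaker's method or certified pose solver, and all tensor names, shapes, and the plain L1 photometric error are assumptions.

    # Illustrative sketch: reconstruct a target frame by warping a neighboring
    # source frame with predicted depth and relative pose; the photometric
    # error is the self-supervised training signal.
    import torch
    import torch.nn.functional as F

    def photometric_loss(target, source, depth, pose, K):
        """target, source: (B,3,H,W) images; depth: (B,1,H,W);
        pose: (B,4,4) target->source transform; K: (B,3,3) intrinsics."""
        B, _, H, W = target.shape

        # Pixel grid in homogeneous coordinates.
        ys, xs = torch.meshgrid(
            torch.arange(H, dtype=target.dtype, device=target.device),
            torch.arange(W, dtype=target.dtype, device=target.device),
            indexing="ij")
        ones = torch.ones_like(xs)
        pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1)  # (1,3,HW)

        # Back-project to 3D in the target camera, move to the source camera.
        cam = torch.inverse(K) @ pix.expand(B, -1, -1) * depth.reshape(B, 1, -1)
        cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=cam.device,
                                           dtype=cam.dtype)], dim=1)  # (B,4,HW)
        cam_src = (pose @ cam_h)[:, :3]                               # (B,3,HW)

        # Project into the source image and sample it at those locations.
        proj = K @ cam_src
        uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
        u = 2.0 * uv[:, 0] / (W - 1) - 1.0
        v = 2.0 * uv[:, 1] / (H - 1) - 1.0
        grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
        warped = F.grid_sample(source, grid, padding_mode="border",
                               align_corners=True)

        # Photometric reconstruction error (L1 here; SSIM is often added).
        return (warped - target).abs().mean()

In practice such pipelines add an SSIM term, multi-scale predictions, and masking of occluded or static pixels; the talk's contribution concerns what to do with the resulting depth and correspondence, namely the certifiably optimal camera localization.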

VASC Seminar
Qi Sun
Assistant Professor
New York University

Toward Human-Centered XR: Bridging Cognition and Computation

Newell-Simon Hall 3305

Abstract: Virtual and Augmented Reality enable unprecedented possibilities for displaying virtual content, sensing physical surroundings, and tracking human behaviors with high fidelity. However, we still haven't created "superhumans" who can outperform what we are in physical reality, nor a "perfect" XR system that delivers infinite battery life or realistic sensation. In this talk, I will discuss some of our [...]

VASC Seminar
Yanxi Liu
Professor
Penn State University

Zeros for Data Science

Newell-Simon Hall 3305

Abstract: The world around us is neither totally regular nor completely random. The reliance of humans and robots alike on spatiotemporal patterns in daily life cannot be overstated, given that most of us can function (perceive, recognize, navigate) effectively in chaotic and previously unseen physical, social, and digital worlds. Data science has been promoted and practiced [...]

VASC Seminar
Agata Lapedriza
Principal Research Scientist/Professor
Northeastern University

Emotion perception: progress, challenges, and use cases

Newell-Simon Hall 3305

Abstract: One of the challenges Human-Centric AI systems face is understanding human behavior and emotions in the context in which they take place. For example, current computer vision approaches for recognizing human emotions usually focus on facial movements and often ignore the context in which those movements occur. In this presentation, I will [...]

VASC Seminar
Yunzhu Li
Assistant Professor
University of Illinois Urbana-Champaign

Foundation Models for Robotic Manipulation: Opportunities and Challenges

Newell-Simon Hall 3305

Abstract: Foundation models, such as GPT-4 Vision, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and [...]

VASC Seminar
Luca Weihs
Research Manager
Allen Institute for AI

Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World

Newell-Simon Hall 3305

Abstract: We show that imitating shortest-path planners in simulation produces Stretch RE-1 robotic agents that, given language instructions, can proficiently navigate, explore, and manipulate objects both in simulation and in the real world using only RGB sensors (no depth maps or GPS coordinates). This surprising result is enabled by our end-to-end, transformer-based SPOC architecture, powerful [...]

VASC Seminar
Vishnu Lokhande
Assistant Professor
University at Buffalo, SUNY

Creating robust deep learning models involves effectively managing nuisance variables

Newell-Simon Hall 3305

Abstract: Over the past decade, we have witnessed significant advances in the capabilities of deep neural network models in vision and machine learning. However, issues related to bias, discrimination, and fairness in general have received a great deal of negative attention (e.g., mistakes in surveillance and vision models confusing humans with animals). But bias in AI models [...]

VASC Seminar
Mohit Gupta
Associate Professor
University of Wisconsin-Madison

Shedding Light on 3D Cameras

Newell-Simon Hall 3305

Abstract: The advent (and commoditization) of low-cost 3D cameras is revolutionizing many application domains, including robotics, autonomous navigation, human-computer interfaces, and recently even consumer devices such as cell phones. Most modern 3D cameras (e.g., LiDAR) are active; they consist of a light source that emits coded light into the scene, i.e., its intensity is modulated over [...]
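
As background on what "active" and temporally modulated illumination mean here (a generic illustration, not the speaker's specific method), continuous-wave time-of-flight cameras recover per-pixel depth from the phase shift of the returned modulation; the four-sample correlation scheme and all names below are assumptions.

    # Sketch of continuous-wave time-of-flight depth recovery: the source
    # intensity is modulated at frequency f_mod and the returned phase shift
    # encodes round-trip travel time.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def cw_tof_depth(q0, q90, q180, q270, f_mod):
        """Per-pixel depth from four samples of the correlation signal taken
        at 0/90/180/270-degree phase offsets, modulation frequency f_mod (Hz)."""
        phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)  # phase shift
        return C * phase / (4 * np.pi * f_mod)                   # d = c*phi/(4*pi*f)

    # Example: 20 MHz modulation gives an unambiguous range of c/(2f) ~ 7.5 m.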

VASC Seminar
Ilya Chugunov
PhD Candidate
Computational Imaging Lab, Princeton University

Neural Field Representations of Mobile Computational Photography

Newell-Simon Hall 3305

Abstract: Burst imaging pipelines allow cellphones to compensate for less-than-ideal optical and sensor hardware by computationally merging multiple lower-quality images into a single high-quality output. The main challenge for these pipelines is compensating for pixel motion, estimating how to align and merge measurements across time while the user's natural hand tremor involuntarily shakes the camera. In [...]
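
For readers new to burst photography, the align-and-merge step the abstract refers to can be sketched as below; this is a generic classical baseline, not the speaker's neural-field approach, and the global affine ECC alignment with plain averaging is a simplifying assumption (production pipelines use tile-based alignment and robust merging).

    # Sketch of classical burst align-and-merge: register each frame to a
    # reference with a global affine warp, then average to reduce noise.
    import cv2
    import numpy as np

    def align_and_merge(frames):
        """frames: list of same-size uint8 grayscale images from a handheld burst."""
        ref = frames[0].astype(np.float32) / 255.0
        merged, count = ref.copy(), 1
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
        for frame in frames[1:]:
            img = frame.astype(np.float32) / 255.0
            warp = np.eye(2, 3, dtype=np.float32)        # initial affine warp
            try:
                _, warp = cv2.findTransformECC(ref, img, warp,
                                               cv2.MOTION_AFFINE, criteria)
            except cv2.error:
                continue                                 # skip frames that fail to align
            aligned = cv2.warpAffine(img, warp, (ref.shape[1], ref.shape[0]),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            merged += aligned
            count += 1
        return merged / count                            # simple average as the merge

The averaging here is where real pipelines diverge: hand tremor makes a single global warp insufficient, which motivates the per-pixel motion models and learned representations the talk is about.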

VASC Seminar
Mian Wei
PhD Candidate
University of Toronto

Passive Ultra-Wideband Single-Photon Imaging

Newell-Simon Hall 3305

Abstract: High-speed light sources, fast cameras, and depth sensors have made it possible to image dynamic phenomena occurring in ever smaller time intervals with the help of actively controlled light sources and synchronization. Unfortunately, while these techniques do capture ultrafast events, they cannot simultaneously capture slower ones. I will discuss our recent work on passive ultra-wideband [...]