VASC Seminar
Adriana Kovashka
Associate Professor in Computer Science
University of Pittsburgh

Weak Multi-modal Supervision for Object Detection and Persuasive Media

Newell-Simon Hall 3305

Abstract:  The diversity of visual content available on the web presents new challenges and opportunities for computer vision models. In this talk, I present our work on learning object detection models from potentially noisy multi-modal data, retrieving complementary content across modalities, transferring reasoning models across dataset boundaries, and recognizing objects in non-photorealistic media.  While the [...]

VASC Seminar
Andrew Owens
Assistant Professor
Electrical Engineering & Computer Science, University of Michigan

Learning Visual, Audio, and Cross-Modal Correspondences

Newell-Simon Hall 3305

Abstract:  Today's machine perception systems rely heavily on supervision provided by humans, such as labels and natural language. I will talk about our efforts to make systems that, instead, learn from two ubiquitous sources of unlabeled data: visual motion and cross-modal sensory associations. I will begin by discussing our work on creating unified models for [...]

VASC Seminar
Lachlan MacDonald
Postdoc
Australian Institute for Machine Learning, University of Adelaide

Towards a formal theory of deep optimisation

Newell-Simon Hall 3305

Abstract:  Precise understanding of the training of deep neural networks is largely restricted to architectures such as MLPs and cost functions such as the square cost, a scope insufficient to cover many practical settings. In this talk, I will argue for the necessity of a formal theory of deep optimisation. I will describe such a [...]

VASC Seminar
Christoph Lassner
Senior Research Scientist
Epic Games

Towards Interactive Radiance Fields

Newell-Simon Hall 3305

Abstract:  Over the past few years, the fields of computer vision and computer graphics have increasingly converged. Using the exact same processes to model appearance during 3D reconstruction and rendering has shown tremendous benefits, especially when combined with machine learning techniques to model otherwise hard-to-capture or -simulate optical effects. In this talk, I will give an [...]

VASC Seminar
Rika Antonova
Postdoctoral Scholar
Stanford University

Enabling Self-sufficient Robot Learning

3305 Newell-Simon Hall

Abstract:  Autonomous exploration and data-efficient learning are important ingredients for helping machine learning handle the complexity and variety of real-world interactions. In this talk, I will describe methods that provide these ingredients and serve as building blocks for enabling self-sufficient robot learning. First, I will outline a family of methods that facilitate active global exploration. [...]

VASC Seminar
Vasudevan (Vasu) Sundarababu
SVP & Head of Digital Engineering
Centific

How Computer Vision Helps – from Research to Scale

3305 Newell-Simon Hall

Abstract:  Vasudevan (Vasu) Sundarababu, SVP and Head of Digital Engineering, will cover the topic ‘How Computer Vision Helps – from Research to Scale’. During his talk, Vasu will explore how Computer Vision technology can be leveraged in the market today, the key projects he is currently leading that leverage CV, and the end-to-end lifecycle of a CV initiative - [...]

VASC Seminar
Rachel McDonnell
Associate Professor
Creative Technologies, Trinity College Dublin, Ireland

Motion Matters in the Metaverse

3305 Newell-Simon Hall

Abstract:  In the early 1970s, psychologists investigated biological motion perception by attaching point-lights to the joints of the human body, known as ‘point-light walkers’. These early experiments showed biological motion perception to be an extreme example of sophisticated pattern analysis in the brain, capable of easily differentiating human motions with reduced motion cues. Further [...]

VASC Seminar
Anand Bhattad
PhD candidate
University of Illinois Urbana-Champaign

What do generative models know about geometry and illumination?

3305 Newell-Simon Hall

Abstract: Generative models can produce compelling pictures of realistic scenes. Objects are in sensible places, surfaces have rich textures, illumination effects appear accurate, and the models are controllable. These models, such as StyleGAN, can also generate semantically meaningful edits of scenes by modifying internal parameters. But do these models manipulate a purely abstract representation of the [...]

VASC Seminar
Saurabh Gupta
Assistant Professor
University of Illinois Urbana-Champaign

Robot Learning by Understanding Egocentric Videos

GHC 8102

Abstract: The true gains of machine learning in AI sub-fields such as computer vision and natural language processing have come from the use of large-scale, diverse datasets for learning. In this talk, I will discuss if and how we can leverage large-scale diverse data in the form of egocentric videos (first-person videos of humans conducting [...]