VASC Seminar
Sanja Fidler
Associate Professor
Department of Computer Science, University of Toronto

Towards AI for 3D Content Creation

Abstract: 3D content is key in several domains such as architecture, film, gaming, and robotics. However, creating 3D content can be very time-consuming -- artists need to sculpt high-quality 3D assets, compose them into large worlds, and bring these worlds to life by writing behaviour models that "drive" the characters around in [...]

VASC Seminar
Farah Deeba
PhD Candidate
Electrical and Computer Engineering Department, University of British Columbia

Understanding the Placenta: Towards an Objective Pregnancy Screening

Abstract: My research focuses on the development of a pregnancy screening tool that will (i) be system- and user-independent and (ii) provide a quantifiable measure of placental health. To this end, I am working towards the design of a multiparametric quantitative ultrasound (QUS) based placental tissue characterization method. The method would potentially identify the [...]

VASC Seminar
Jiachen Li
Ph.D. Candidate
University of California, Berkeley

Relational Reasoning for Multi-Agent Systems

Abstract: Multi-agent interacting systems are prevalent in the world, from purely physical systems to complicated social dynamics systems. The interactions between entities/components can give rise to very complex behavior patterns at the level of both individuals and the whole system. In many real-world multi-agent interacting systems (e.g., traffic participants, mobile robots, sports players), [...]

VASC Seminar
Hamed Pirsiavash
Assistant Professor
University of Maryland Baltimore County

Self-supervised learning for visual recognition

Abstract: We are interested in learning visual representations that are discriminative for semantic image understanding tasks such as object classification, detection, and segmentation in images/videos. A common approach to obtain such features is to use supervised learning. However, this requires manual annotation of images, which is costly, ambiguous, and prone to errors. In contrast, self-supervised [...]

VASC Seminar
Ronghang Hu
Research Scientist
Facebook Inc.

Reasoning over Text in Images for VQA and Captioning

Abstract: Text in images carries essential information for multimodal reasoning, such as VQA or image captioning. To enable machines to perceive and understand scene text and reason jointly with other modalities, 1) we collect the TextCaps dataset, which requires models to read and reason over text and visual content in the image to generate image [...]

VASC Seminar
Jhony Kaesemodel Pontes
Research Scientist
Argo AI

Point Cloud Registration with or without Learning

Abstract: I will be presenting two of our recent works on 3D point cloud registration. A scene flow method for non-rigid registration: I will discuss our current method to recover scene flow from point clouds. Scene flow is the three-dimensional (3D) motion field of a scene, and it provides information about the spatial arrangement [...]
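
For readers unfamiliar with the term, the short Python sketch below only illustrates what a scene-flow field is (a per-point 3D displacement that warps a point cloud from one time step to the next) and how an estimate is commonly scored with end-point error. The sizes and values are illustrative assumptions; this is not the registration method presented in the talk.

# Minimal illustration of a scene-flow field: one 3D displacement per point.
# Purely a sketch under assumed values, not the speaker's method.
import numpy as np

rng = np.random.default_rng(0)
points_t = rng.uniform(-1.0, 1.0, size=(1000, 3))   # point cloud at time t
flow = np.tile([0.05, 0.0, 0.02], (1000, 1))        # hypothetical per-point 3D motion field (N, 3)
points_t1 = points_t + flow                          # warped cloud at time t+1

# End-point error: a common metric comparing an estimated flow to ground truth.
estimated_flow = flow + rng.normal(scale=0.01, size=flow.shape)
epe = np.linalg.norm(estimated_flow - flow, axis=1).mean()
print(f"mean end-point error: {epe:.4f} m")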

VASC Seminar
Arsalan Mousavian
Senior Robotics Research Scientist
NVIDIA

Propelling Robot Manipulation of Unknown Objects using Learned Object Centric Models

Abstract: There is a growing interest in using data-driven methods to scale up manipulation capabilities of robots for handling a large variety of objects. Many of these methods are oblivious to the notion of objects and they learn monolithic policies from the whole scene in image space. As a result, they don’t generalize well to [...]

VASC Seminar
Phillip Isola
Assistant Professor
EECS, MIT

When and Why Does Contrastive Learning Work?

Abstract: Contrastive learning organizes data by pulling together related items and pushing apart everything else. These methods have become very popular, but it's still not entirely clear when and why they work. I will share two ideas from our recent work. First, I will argue that contrastive learning is really about learning to forget. Different [...]
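
As a point of reference for the "pull together / push apart" intuition, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss. The embedding dimensions, temperature, and synthetic data are illustrative assumptions; this is not the specific formulation analyzed in the talk.

# Minimal InfoNCE-style contrastive loss sketch (illustrative assumptions only).
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """anchors, positives: (N, D) L2-normalized embeddings; row i of each is a
    related pair. All other rows act as negatives to be pushed apart."""
    logits = anchors @ positives.T / temperature      # (N, N) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # maximize matched-pair probability

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
z_aug = z + 0.05 * rng.normal(size=z.shape)           # a lightly perturbed "related" view
z_aug /= np.linalg.norm(z_aug, axis=1, keepdims=True)
print(info_nce_loss(z, z_aug))                        # low loss: matched pairs are aligned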

VASC Seminar
Ehsan Adeli
Clinical Assistant Professor
Stanford University

Anticipating the Future: Forecasting the Dynamics in Multiple Levels of Abstraction

Abstract: A key navigational capability for autonomous agents is to predict the future locations, actions, and behaviors of other agents in the environment. This is particularly crucial for safety in the realm of autonomous vehicles and robots. However, many current approaches to navigation and control assume perfect perception and knowledge of the environment, even though [...]

VASC Seminar
Xiaolong Wang
Assistant Professor
UCSD

Learning to Perceive Videos for Embodiment

Abstract: Video understanding has achieved tremendous success in computer vision tasks such as action recognition, visual tracking, and visual representation learning. Recently, this success has gradually been extended to helping robots and embodied agents interact with their environments. In this talk, I am going to introduce our recent efforts on extracting self-supervisory signals and [...]