VASC Seminar
Ehsan Adeli
Clinical Assistant Professor
Stanford University

Anticipating the Future: forecasting the dynamics in multiple levels of abstraction

Abstract: A key navigational capability for autonomous agents is to predict the future locations, actions, and behaviors of other agents in the environment. This is particularly crucial for safety in the realm of autonomous vehicles and robots. However, many current approaches to navigation and control assume perfect perception and knowledge of the environment, even though [...]

VASC Seminar
Xiaolong Wang
Assistant Professor
UCSD

Learning to Perceive Videos for Embodiment

Abstract: Video understanding has achieved tremendous success in computer vision tasks, such as action recognition, visual tracking, and visual representation learning. Recently, this success has gradually been translated into enabling robots and embodied agents to interact with their environments. In this talk, I am going to introduce our recent efforts on extracting self-supervisory signals and [...]

VASC Seminar
Xavier Giró-i-Nieto
Associate Professor
Universitat Politècnica de Catalunya

Open Challenges in Sign Language Translation & Production

Abstract: Machine translation and computer vision have greatly benefited from advances in deep learning. Large and diverse amounts of textual and visual data have been used to train neural networks, whether in a supervised or self-supervised manner. Nevertheless, the convergence of the two fields in sign language translation and production still poses [...]

RI Seminar
Andrew E. Johnson
Principal Robotics Systems Engineer
NASA Jet Propulsion Laboratory, California Institute of Technology

The Search for Ancient Life on Mars Began with a Safe Landing

1305 Newell Simon Hall

Abstract: Prior Mars rover missions have all landed in flat and smooth regions, but for the Mars 2020 mission, which is seeking signs of ancient life, this was no longer acceptable. To maximize the variety of rock samples that will eventually be returned to Earth for analysis, the Perseverance rover needed to land in a [...]

VASC Seminar
Ishan Misra
Research Scientist
Facebook AI Research

3D Recognition with self-supervised learning and generic architectures

Abstract: Supervised learning relies on manual labeling which scales poorly with the number of tasks and data. Manual labeling is especially cumbersome for 3D recognition tasks such as detection and segmentation and thus most 3D datasets are surprisingly small compared to image or video datasets. 3D recognition methods are also fragmented based on the type [...]

VASC Seminar
Deepak Pathak
Assistant Professor
Carnegie Mellon University

Rapid Adaptation for Robot Learning

Abstract: How can we train a robot to generalize to diverse environments? This question underscores the holy grail of robot learning research because it is difficult to supervise an agent for all possible situations it can encounter in the future. We posit that the only way to guarantee such a generalization is to continually learn and [...]

RI Seminar
Systems Scientist
Robotics Institute, Carnegie Mellon University

Robotic Cave Exploration for Search, Science, and Survey

1305 Newell Simon Hall

Abstract: Robotic cave exploration has the potential to create significant societal impact by facilitating search and rescue, aiding the fight against antibiotic resistance (science), and enabling mapping (survey). But many state-of-the-art approaches for active perception and autonomy in subterranean environments rely on disparate perceptual pipelines (e.g., pose estimation, occupancy modeling, hazard detection) that process the same underlying sensor data in [...]

VASC Seminar
Iasonas Kokkinos
Research Manager
Snap Inc, UCL

Humans, hands, and horses: 3D reconstruction of articulated object categories using strong, weak, and self-supervision

Abstract: Reconstructing 3D objects from a single 2D image is a task that humans perform effortlessly, yet computer vision has so far only robustly solved 3D face reconstruction. In this talk we will see how we can extend the scope of monocular 3D reconstruction to more challenging, articulated categories such as human bodies, hands and [...]

RI Seminar
Thomas Howard
Assistant Professor of Electrical and Computer Engineering
Electrical & Computer Engineering, University of Rochester

Enabling Grounded Language Communication for Human-Robot Teaming

1305 Newell Simon Hall

Abstract: The ability of robots to effectively understand natural language instructions and convey information about their observations and interactions with the physical world depends heavily on the sophistication and fidelity of the robot’s representations of language, environment, and actions. As we progress towards more intelligent systems that perform a wider range of tasks in a [...]

VASC Seminar
Alex Schwing
Assistant Professor
University of Illinois

Looking behind the Seen in Order to Anticipate

Abstract: Despite significant recent progress in computer vision and machine learning, personalized autonomous agents often still don’t participate robustly and safely across tasks in our environment. We think this is largely because they lack an ability to anticipate, which in turn is due to a missing understanding about what is happening behind the seen, i.e., [...]