Faculty Candidate
Multimodal Computational Behavior Understanding
Emotions influence our lives. Observational methods of measuring affective behavior have yielded critical insights, but a persistent barrier to their wide application is that they are labor-intensive to learn and to use. An automated system that can quantify and synthesize human affective behavior in real-world environments would be a transformational tool for research and for [...]
Faster, Safer, Smaller: The future of autonomy needs all three
Abstract: In this talk, I will begin with state estimation, the focus of my PhD work. State estimation often plays a crucial role in a robotic system, serving as a building block for autonomy. The challenges are to carry out state estimation in 6-DOF, in real time at high frequencies, with high precision, robust to aggressive motion and [...]
Automatic Human Behavior Analysis and Recognition for Research and Clinical Use
Nonverbal behavior is multimodal and interpersonal. In several studies, I have addressed the dynamics of facial expression and head movement in emotion communication, social interaction, and clinical applications. By modeling multimodal and interpersonal communication, my work seeks to inform affective computing and behavioral health informatics. In this talk, I will address some of my recent work [...]
Service Robots for All
Robots have the unique potential to help people, especially people with disabilities, in their daily lives. However, providing continuous physical and social support in human environments requires new algorithmic approaches that are fast, adaptable, robust to real-world noise, and can handle unconstrained behavior from diverse users. This talk will describe my work developing and studying [...]
Carnegie Mellon University
Towards Generalization and Efficiency in Reinforcement Learning
Abstract: In classic supervised machine learning, a learning agent behaves as a passive observer: it receives examples from some external environment which it has no control over and then makes predictions. Reinforcement Learning (RL), on the other hand, is fundamentally interactive: an autonomous agent must learn how to behave in an unknown and possibly [...]
Resilient Safety Assurance for Human-Centered Autonomous Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must be able to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like [...]
Carnegie Mellon University
Rethinking the Relationship between Data and Robotics
Abstract: While robotics has made tremendous progress over the last few decades, most success stories are still limited to carefully engineered and precisely modeled environments. Interestingly, one of the most significant successes in the last decade of AI has been the use of Machine Learning (ML) to generalize and robustly handle diverse situations. So why [...]
Faculty Candidate: Yuke Zhu
Talk: Closing the Perception-Action Loop
Abstract: Robots and autonomous systems have been playing a significant role in the modern economy. Custom-built robots have remarkably improved productivity, operational safety, and product quality. However, these robots are usually programmed for specific tasks in well-controlled environments, unable to perform diverse tasks in the real world. In this talk, I will [...]
Self-Directed Learning
Abstract: Generalization, i.e., the ability to adapt to novel scenarios, is the hallmark of human intelligence. While we have systems that excel at recognizing objects, cleaning floors, playing complex games, and occasionally beating humans, they are incredibly specific: they perform only the tasks they are trained for and generalize poorly. In [...]
Learning to see the physical world
Abstract: Human intelligence is beyond pattern recognition. From a single image, we're able to explain what we see, reconstruct the scene in 3D, predict what's going to happen, and plan our actions accordingly. In this talk, I will present our recent work on physical scene understanding---building versatile, data-efficient, and generalizable machines that learn to see, reason about, and interact [...]
Learning to Synthesize Images
Abstract: People are avid consumers of visual content. Every day, we watch videos, play games, and share photos on social media. However, there is an asymmetry: while everybody is able to consume visual content, only a chosen few (e.g., painters, sculptors, film directors) are talented enough to express themselves visually. For example, in modern [...]
Faculty Candidate: Angjoo Kanazawa
Title: Perceiving Humans in the 3D World
Abstract: Since the dawn of civilization, we have functioned in a social environment where we spend our days interacting with other humans. As we approach a society where intelligent systems and humans coexist, these systems must also interpret and interact with humans that reside in the 3D world. [...]
Augmenting Imagination: Capturing, Modeling, and Exploring the World Through Video
Abstract: Cameras offer a rich and ubiquitous source of data about the world around us, providing many opportunities to explore new computational approaches to real-world problems. In this talk, I will show how insights from art, science, and engineering can help us connect progress in visual computing with typically non-visual problems in other domains, allowing [...]
AI-Driven Video Synthesis and its Implications
Abstract: In this talk, I will present my research vision of how to create photo-realistic digital replicas of the real world, and how to make holograms a reality. Eventually, I would like to see photos and videos evolve into interactive, holographic content indistinguishable from the real world. Imagine taking such 3D photos to [...]
Understanding 3D Scans
Abstract: With recent developments in both commodity range sensors and mixed reality devices, capturing and creating 3D models of the world around us has become increasingly important. As the world around us exists in three-dimensional space, such 3D models will not only facilitate capture and display for content creation but also provide [...]
Faculty Candidate: Wenshan Wang
Title: Towards General Autonomy: Learning from Simulation, Interaction, and Demonstration
Abstract: Today's autonomous systems are still brittle in challenging environments, or they rely on designers to anticipate all possible scenarios in order to respond appropriately. On the other hand, by leveraging machine learning techniques, robot systems can be trained in simulation or the real world for various tasks. Due to [...]