Faculty Candidate
Multimodal Computational Behavior Understanding
Emotions influence our lives. Observational methods of measuring affective behavior have yielded critical insights, but a persistent barrier to their wide application is that they are labor-intensive to learn and to use. An automated system that can quantify and synthesize human affective behavior in real-world environments would be a transformational tool for research and for [...]
Faster, Safer, Smaller: The future of autonomy needs all three
Abstract: In this talk, I will begin with state estimation, the focus of my PhD work. State estimation often plays a crucial role in a robotic system, serving as a building block for autonomy. The challenges are to carry out state estimation in 6-DOF, in real time at high frequencies, with high precision, robust to aggressive motion and [...]
Automatic Human Behavior Analysis and Recognition for Research and Clinical Use
Nonverbal behavior is multimodal and interpersonal. In several studies, I have addressed the dynamics of facial expression and head movement for emotion communication, social interaction, and clinical applications. By modeling multimodal and interpersonal communication, my work seeks to inform affective computing and behavioral health informatics. In this talk, I will address some of my recent work [...]
Service Robots for All
Robots have the unique potential to help people, especially people with disabilities, in their daily lives. However, providing continuous physical and social support in human environments requires new algorithmic approaches that are fast, adaptable, robust to real-world noise, and can handle unconstrained behavior from diverse users. This talk will describe my work developing and studying [...]
Carnegie Mellon University
Towards Generalization and Efficiency in Reinforcement Learning
Abstract: In classic supervised machine learning, a learning agent behaves as a passive observer: it receives examples from some external environment over which it has no control and then makes predictions. Reinforcement Learning (RL), on the other hand, is fundamentally interactive: an autonomous agent must learn how to behave in an unknown and possibly [...]
Resilient Safety Assurance for Human-Centered Autonomous Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must be able to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like [...]
Carnegie Mellon University
Rethinking the Relationship between Data and Robotics
Abstract: While robotics has made tremendous progress over the last few decades, most success stories are still limited to carefully engineered and precisely modeled environments. Interestingly, one of the most significant successes in the last decade of AI has been the use of Machine Learning (ML) to generalize and robustly handle diverse situations. So why [...]
Faculty Candidate: Yuke Zhu
Talk: Closing the Perception-Action Loop
Abstract: Robots and autonomous systems have been playing a significant role in the modern economy. Custom-built robots have remarkably improved productivity, operational safety, and product quality. However, these robots are usually programmed for specific tasks in well-controlled environments and are unable to perform diverse tasks in the real world. In this talk, I will [...]
Self-Directed Learning
Abstract: Generalization, i.e., the ability to adapt to novel scenarios, is the hallmark of human intelligence. While we have systems that excel at recognizing objects, cleaning floors, playing complex games, and occasionally beating humans, they are incredibly specific: they perform only the tasks they are trained for and are miserable at generalization. In [...]
Learning to see the physical world
Abstract: Human intelligence is beyond pattern recognition. From a single image, we're able to explain what we see, reconstruct the scene in 3D, predict what's going to happen, and plan our actions accordingly. In this talk, I will present our recent work on physical scene understanding---building versatile, data-efficient, and generalizable machines that learn to see, reason about, and interact [...]