Model-Centric Verification of Artificial Intelligence
Abstract: This work shows how provable guarantees can be used to supplement probabilistic estimates in the context of Artificial Intelligence (AI) systems. Statistical techniques measure the expected performance of a model, but low error rates say nothing about the ways in which errors manifest. Formal verification of model adherence to design specifications can yield certificates [...]
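The abstract stops before the details, but one standard way to obtain such certificates is interval bound propagation, which computes provable output bounds for a network over an entire set of inputs. The sketch below is a generic illustration in pure Python, not the speaker's method; the toy network weights are invented for the example.

```python
def affine_bounds(W, b, lo, hi):
    """Propagate the box [lo, hi] through y = W x + b (exact for affine maps)."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        center = sum(w * (l + h) / 2 for w, l, h in zip(row, lo, hi)) + bias
        radius = sum(abs(w) * (h - l) / 2 for w, l, h in zip(row, lo, hi))
        out_lo.append(center - radius)
        out_hi.append(center + radius)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Toy 2-layer network (weights are illustrative, not from any real model).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = affine_bounds(W1, b1, [0.0, 0.0], [1.0, 1.0])
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)
# Every input in the unit box is now certified to map into [lo[0], hi[0]],
# a guarantee over all inputs rather than a statistical estimate on a test set.
```

Unlike an error rate measured on samples, the resulting interval holds for every point in the input box, which is the sense in which such certificates supplement probabilistic estimates.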
Designing Whisker Sensors to Detect Multiple Mechanical Stimuli for Robotic Applications
Abstract: Many mammals, such as rats and seals, use their whiskers as versatile mechanical sensors to gain precise information about their surroundings. Whisker-inspired sensors on robotic platforms have shown their potential benefit, improving applications ranging from drone navigation to texture mapping. Despite this, there is a gap between the engineered sensors and many of the [...]
Carnegie Mellon University
Human-in-the-loop Control of Mobile Robots
Abstract: Human-in-the-loop control for mobile robots is an important aspect of robot operation, especially for navigation in unstructured environments or in the case of unexpected events. However, traditional paradigms of human-in-the-loop control have either relied heavily on the human to provide precise and accurate control inputs to the robot, or have reduced the role of the human [...]
Visual Understanding across Semantic Groups, Domains and Devices
Abstract: Deep neural networks often lack generalization capabilities to accommodate changes in the input/output domain distributions and, therefore, are inherently limited by the restricted visual and semantic information contained in the original training set. In this talk, we argue the importance of the versatility of deep neural architectures and we explore it from various perspectives. [...]
Towards Robust Human-Robot Interaction: A Quality Diversity Approach
Abstract: The growth of scale and complexity of interactions between humans and robots highlights the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring the diverse scenarios of interaction between humans and robots in simulation can improve understanding of complex human-robot interaction systems and avoid potentially costly failures in real-world settings. [...]
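A standard quality diversity algorithm is MAP-Elites, which, rather than seeking one best solution, fills an archive with behaviorally distinct elites — the kind of broad scenario coverage the abstract alludes to. The following is a minimal generic sketch on a toy objective, not the speaker's system; the fitness and behavior functions are placeholders.

```python
import random

def map_elites(fitness, behavior, dim=2, n_bins=10, iters=2000, seed=0):
    """Fill an archive of elites: one best-so-far solution per behavior bin."""
    rng = random.Random(seed)
    archive = {}  # bin coordinates -> (fitness, solution)

    def bin_of(x):
        # Behavior descriptors are assumed to lie in [-1, 1] per dimension.
        return tuple(min(n_bins - 1, int((b + 1) / 2 * n_bins)) for b in behavior(x))

    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # Mutate a randomly chosen existing elite...
            parent = rng.choice(list(archive.values()))[1]
            x = [xi + rng.gauss(0.0, 0.1) for xi in parent]
        else:
            # ...or sample a brand-new random solution.
            x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        x = [max(-1.0, min(1.0, xi)) for xi in x]
        key, fit = bin_of(x), fitness(x)
        if key not in archive or fit > archive[key][0]:
            archive[key] = (fit, x)  # keep only the best solution per bin
    return archive

# Toy problem: maximize -||x||^2 while covering the 2-D behavior space (x itself).
archive = map_elites(lambda x: -sum(xi * xi for xi in x), lambda x: x)
```

With a few thousand iterations the archive typically covers most of the 10x10 behavior grid, yielding a diverse set of scenarios rather than a single optimum — in an HRI setting, the behavior descriptor would characterize the interaction scenario and the fitness would score, e.g., failure severity.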
Planning and Execution using Inaccurate Models with Provable Guarantees on Task Completeness
Abstract: Modern planning methods are effective in computing feasible and optimal plans for robotic tasks when given access to accurate dynamical models. However, robots operating in the real world often face situations that cannot be modeled perfectly before execution. Thus, we only have access to simplified but potentially inaccurate models. This imperfect modeling can lead [...]
Topology-Driven Learning for Biomedical Imaging Informatics
Abstract: Thanks to decades of technology development, we can now visualize complex biomedical structures, such as neurons, vessels, trabeculae and breast tissues, in high quality. We need innovative methods to fully exploit these structures, which encode important information about the underlying biological mechanisms. In this talk, we explain how topology, i.e., connected components, handles, loops, [...]
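As a concrete example of the 0-dimensional topology mentioned here (connected components), a union-find pass over a graph — say, a skeletonized vessel network — counts how many separate structures are present. This is a generic textbook sketch, not the speaker's method; the example graph is made up.

```python
def count_components(n_nodes, edges):
    """Count connected components of a graph with union-find (path halving)."""
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving keeps trees shallow
            i = parent[i]
        return i

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # merge the two components
    return len({find(i) for i in range(n_nodes)})

# Hypothetical skeleton graph: nodes 0-3 form one branch, nodes 4-5 another.
n_components = count_components(6, [(0, 1), (1, 2), (2, 3), (4, 5)])
```

In a biomedical image, each component might correspond to one vessel tree or neuron; higher-dimensional features such as handles and loops require the heavier machinery (e.g., persistent homology) that the talk goes on to discuss.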
Lessons from the Field: Deep Learning and Machine Perception for Field Robots
Abstract: Mobile robots now deliver vast amounts of sensor data from large unstructured environments. Processing and interpreting this data poses many unique challenges in bridging the gap between prerecorded data sets and the field. This talk will present recent work on applying machine learning techniques to mobile robotic perception. [...]
Learning generative representations for image distributions
Abstract: Autoencoder neural networks are an unsupervised representation-learning technique that has been used effectively in many data domains. While capable of generating data, autoencoders have been inferior to other models such as Generative Adversarial Networks (GANs) in their ability to generate image data. We will describe a general autoencoder architecture that addresses this limitation, and [...]
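The architecture in the talk is not spelled out here, but the basic autoencoder idea — compress to a low-dimensional code, then reconstruct — can be shown with a minimal linear example trained by gradient descent. This is a generic one-code-unit sketch on synthetic 2-D data, not the architecture the abstract refers to.

```python
import random

def train_linear_autoencoder(data, lr=0.01, epochs=2000, seed=0):
    """Fit a 2-D -> 1-D -> 2-D linear autoencoder by plain gradient descent."""
    rng = random.Random(seed)
    w_enc = [rng.gauss(0.0, 0.5) for _ in range(2)]  # encoder: 2-D input -> 1-D code
    w_dec = [rng.gauss(0.0, 0.5) for _ in range(2)]  # decoder: 1-D code -> 2-D output
    for _ in range(epochs):
        for x in data:
            z = w_enc[0] * x[0] + w_enc[1] * x[1]      # encode
            recon = [w_dec[0] * z, w_dec[1] * z]       # decode
            err = [recon[i] - x[i] for i in range(2)]  # reconstruction error
            # Gradients of the squared error w.r.t. decoder and encoder weights.
            g_dec = [2.0 * err[i] * z for i in range(2)]
            g_z = 2.0 * (err[0] * w_dec[0] + err[1] * w_dec[1])
            g_enc = [g_z * x[i] for i in range(2)]
            w_dec = [w_dec[i] - lr * g_dec[i] for i in range(2)]
            w_enc = [w_enc[i] - lr * g_enc[i] for i in range(2)]
    return w_enc, w_dec

# Synthetic data lying on a 1-D subspace, so a single code unit can reconstruct it.
data = [[t / 10.0, 2.0 * t / 10.0] for t in range(-10, 11)]
w_enc, w_dec = train_linear_autoencoder(data)
```

Because the data lies on a line through the origin, the learned code recovers it almost exactly; on data of higher intrinsic dimension, a linear autoencoder like this one recovers the principal subspace, and generative use requires the additional machinery (e.g., sampling the code space) that the talk addresses.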
Self-Supervising Occlusions for Vision
Abstract: Virtually every scene has occlusions. Even a scene with a single object exhibits self-occlusions: a camera can only view one side of an object (left or right, front or back), or part of the object is outside the field of view. More complex occlusions occur when one or more objects block part(s) of [...]