Modeling Coupled Human-Robot Motion for Provable Safety
Abstract: Guide robots that help users who are blind or have low vision navigate through crowds and complex environments show promise for improving accessibility in public spaces. These robots must provide real-time safety guarantees for their users, which requires accurate modeling of the users' behavior in the context of closely coupled human-robot motion. This model must also [...]
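A minimal sketch of the kind of real-time check the abstract above alludes to, not the thesis' actual model: here the human is rolled out with a simple constant-velocity prediction and checked against a fixed safety radius along the robot's plan. The horizon, time step, and radius are illustrative assumptions; a coupled model would condition the human's motion on the robot's plan.

```python
import numpy as np

def is_plan_safe(robot_plan, human_pos, human_vel, dt=0.1, safety_radius=0.5):
    """robot_plan: (T, 2) array of planned robot positions, one per time step."""
    for t, robot_pos in enumerate(robot_plan):
        # Constant-velocity rollout of the human (an assumption, not the coupled model).
        predicted_human = human_pos + human_vel * (t * dt)
        if np.linalg.norm(robot_pos - predicted_human) < safety_radius:
            return False  # predicted separation violated at step t
    return True

# Toy example: a straight-line robot plan and a pedestrian walking toward it.
plan = np.stack([np.linspace(0, 2, 20), np.zeros(20)], axis=1)
print(is_plan_safe(plan, human_pos=np.array([2.0, 1.0]), human_vel=np.array([0.0, -1.0])))
```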
MSR Thesis Talk – Mosam Dabhi
Title: Multi-view NRSfM: Affordable setup for high-fidelity 3D reconstruction
Abstract: Triangulating a point in 3D space should only require two corresponding camera projections. In practice, however, expensive multi-view setups -- involving tens, sometimes hundreds, of cameras -- are required to obtain the high-fidelity 3D reconstructions necessary for many modern applications. In this talk, we argue [...]
Carnegie Mellon University
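A minimal sketch of the two-view claim in the abstract above: with two known camera projection matrices and a matched pixel in each image, the 3D point follows from a linear (DLT) triangulation. The cameras and point below are toy values for illustration, not from the talk.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel correspondences."""
    # Each correspondence contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null-space vector = homogeneous 3D point
    return X[:3] / X[3]   # dehomogenize

# Two toy cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted 1 unit along x
X = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate_dlt(P1, P2, x1, x2))  # approximately [0, 0, 5]
```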
Robust Object Representations for Robot Manipulation
Abstract: As robots become more common in our daily lives, they will need to interact with many different environments and countless types of objects. While we, as humans, can easily understand an object after seeing it only once, this task is not trivial for robots. Researchers have, for the most part, been left with two [...]
Diminished Reality for Close Quarters Robotic Telemanipulation
Abstract: In robot telemanipulation tasks, the robot itself can sometimes occlude a target object from the user's view. We investigate the potential of diminished reality to address this problem. Our method uses an optical see-through head-mounted display to create a diminished reality illusion that the robot is transparent, allowing users to see occluded areas behind [...]
Carnegie Mellon University
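A minimal sketch of the "transparent robot" idea in the abstract above, illustrative rather than the paper's pipeline: composite a pre-captured background view into the pixels the robot occupies, with an alpha weight controlling how transparent the robot appears. The robot mask and a registered background image are assumed to be given.

```python
import numpy as np

def diminish_robot(live_frame, background, robot_mask, alpha=0.7):
    """live_frame, background: HxWx3 float arrays; robot_mask: HxW bool array.
    alpha is the weight of the background inside the robot region."""
    out = live_frame.astype(float).copy()
    # Blend the background into the masked (robot) pixels only.
    out[robot_mask] = (alpha * background[robot_mask]
                       + (1.0 - alpha) * live_frame[robot_mask])
    return out

# Toy 4x4 "images": one bright robot pixel gets blended toward the background.
live = np.zeros((4, 4, 3)); live[1, 1] = 1.0
bg = np.full((4, 4, 3), 0.25)
mask = np.zeros((4, 4), dtype=bool); mask[1, 1] = True
print(diminish_robot(live, bg, mask)[1, 1])  # ~[0.475, 0.475, 0.475]
```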
Visual Representation and Recognition without Human Supervision
Abstract: Visual recognition models have seen great advancements by relying on large-scale, carefully curated datasets with human annotations. Most computer vision models leverage human supervision either to construct strong initial representations (e.g. using the ImageNet dataset) or to model the visual concepts relevant for downstream tasks (e.g. MS-COCO for object detection). In this thesis, we [...]
Learning Compositional Radiance Fields of Dynamic Human Heads
Abstract: Photorealistic rendering of dynamic humans is an important capability for telepresence systems. Recently, neural rendering methods have been developed to create high-fidelity models of humans and objects. Some of these methods do not produce results with high enough fidelity for drivable human models (Neural Volumes), whereas others have [...]
When and Why Does Contrastive Learning Work?
Abstract: Contrastive learning organizes data by pulling together related items and pushing apart everything else. These methods have become very popular, but it's still not entirely clear when and why they work. I will share two ideas from our recent work. First, I will argue that contrastive learning is really about learning to forget. Different [...]
Carnegie Mellon University
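A minimal sketch of the pull/push objective the abstract above describes, in the standard InfoNCE form (not necessarily the speaker's variant): each embedding is pulled toward its paired positive and pushed away from the rest of the batch. The temperature and batch size are illustrative.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) arrays of paired embeddings (two views of the same items)."""
    # Normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives sit on the diagonal

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.05 * rng.normal(size=(8, 16))  # related items stay close
print(info_nce_loss(anchors, positives))
```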
Heuristic Search Based Planning by Minimizing Anticipated Search Efforts
Abstract: Robot planning problems in dynamic environments, such as navigation among pedestrians, driving at high speed on densely populated roads, and manipulation for collaborative tasks alongside humans, necessitate efficient planning. Bounded-suboptimal heuristic search algorithms are a popular alternative to optimal heuristic search algorithms, trading solution quality for computation speed. Specifically, these searches aim to find [...]
Carnegie Mellon University
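A minimal sketch of a bounded-suboptimal search of the kind the abstract above refers to: weighted A* inflates the heuristic by a factor w, which bounds the returned solution cost to within w times optimal while typically expanding far fewer nodes. The grid, heuristic, and weight below are illustrative; the talk's own algorithm, which minimizes anticipated search effort, is not shown here.

```python
import heapq

def weighted_astar(start, goal, neighbors, heuristic, w=2.0):
    """Returns (cost, path) or None. Priority f = g + w * h gives the w-suboptimality bound."""
    open_list = [(w * heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for nxt, step_cost in neighbors(node):
            ng = g + step_cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_list, (ng + w * heuristic(nxt, goal), ng, nxt, path + [nxt]))
    return None

# 4-connected 10x10 grid with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1.0) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(weighted_astar((0, 0), (9, 9), grid_neighbors, manhattan, w=2.0))
```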
Liquid Metal Actuators
Abstract: Bioinspired robotic actuators build on advances in soft materials and activation methods to achieve the desired performance. Because of their intrinsic compliance, actuators built from soft materials and liquids can achieve elastic resilience and adaptability similar to their biological counterparts. Liquid metals provide great opportunities for creating an artificial muscle that generates forces at [...]