A Modularized Approach to Vision-based Tactile Sensor Design Using Physics-based Rendering
Abstract: Touch is an essential sensing modality for making autonomous robots more dexterous and allowing them to work collaboratively with humans. In particular, the advent of vision-based tactile sensors has spurred efforts to design them for different robotic manipulation tasks. However, this design task remains challenging, for two reasons: first, [...]
Towards Universal Place Recognition
Abstract: Place recognition is essential for achieving robust robot localization. However, current state-of-the-art systems remain environment- or domain-specific and fragile. By leveraging insights from vision foundation models, we present AnyLoc, a universal visual place recognition (VPR) solution that performs across diverse environments without retraining or fine-tuning, significantly outperforming supervised baselines. We further introduce MultiLoc and enable [...]
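The core recipe here is retrieval over frozen foundation-model features, with no task-specific training. Below is a minimal, hedged sketch of one plausible instantiation in Python, assuming a DINOv2 backbone loaded via torch.hub and simple GeM pooling as the aggregator; the `describe` and `retrieve` helpers are illustrative names of ours, not the paper's API, and the exact AnyLoc aggregation pipeline may differ.

```python
# Sketch: universal place recognition from frozen foundation-model features.
import torch
import torch.nn.functional as F

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()

@torch.no_grad()
def describe(image: torch.Tensor, p: float = 3.0) -> torch.Tensor:
    """image: (3, H, W) with H, W multiples of 14. Returns an L2-normalized
    global descriptor built from frozen patch features (no fine-tuning)."""
    tokens = backbone.forward_features(image.unsqueeze(0))["x_norm_patchtokens"]
    # Generalized-mean (GeM) pooling over patch tokens (our aggregator choice).
    pooled = tokens.clamp(min=1e-6).pow(p).mean(dim=1).pow(1.0 / p)
    return F.normalize(pooled, dim=-1).squeeze(0)

def retrieve(query_desc: torch.Tensor, db_descs: torch.Tensor) -> int:
    """Nearest neighbor by cosine similarity over an (N, D) descriptor bank."""
    return int((db_descs @ query_desc).argmax())
```

Because both query and database descriptors come from a frozen backbone, the same code runs unchanged across environments, which is the property the abstract emphasizes.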
Enhancing Model Performance and Interpretability with Causal Inference as a Feature Selection Algorithm
Abstract: Causal inference focuses on uncovering cause-effect relationships from data, diverging from conventional machine learning, which relies primarily on correlation analysis. By identifying these causal relationships, causal inference improves feature selection for predictive models, leading to predictions that are more accurate, interpretable, and robust. This approach proves especially effective with interventional data, such as randomized [...]
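The idea of causal inference as a feature selector can be illustrated with a simple effect-ranking loop. The sketch below is not the paper's algorithm: it estimates each feature's linear effect on the target after residualizing on the remaining features (the Frisch-Waugh-Lovell construction) and keeps the top-k; `causal_effect_score` and `select_features` are hypothetical helper names of ours.

```python
# Sketch: rank features by an estimated (linear) causal effect, then select.
import numpy as np
from sklearn.linear_model import LinearRegression

def causal_effect_score(X: np.ndarray, y: np.ndarray, j: int) -> float:
    """Estimate feature j's effect on y, adjusting for all other features
    via a simple linear backdoor adjustment (illustrative only)."""
    others = np.delete(X, j, axis=1)
    # Residualize "treatment" and outcome on the adjustment set (FWL theorem).
    t_res = X[:, j] - LinearRegression().fit(others, X[:, j]).predict(others)
    y_res = y - LinearRegression().fit(others, y).predict(others)
    return abs(LinearRegression().fit(t_res[:, None], y_res).coef_[0])

def select_features(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k features with the largest estimated effects."""
    scores = [causal_effect_score(X, y, j) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]
```

A downstream predictor trained on `X[:, select_features(X, y)]` then uses only features with estimated causal influence, which is the accuracy-plus-interpretability argument the abstract makes.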
ARPA-H and America’s Health: Pursuing High-Risk/High-Reward Research to Improve Health Outcomes for All
Dr. Andy Kilianski will provide an overview of ARPA-H, a new U.S. government funding agency pursuing R&D for health challenges. He will review the unique niche occupied by ARPA-H within the Department of Health and Human Services and how ARPA-H is already partnering with academia and industry to transform health outcomes across the country. Discussion [...]
GNSS-denied Ground Vehicle Localization for Off-road Environments with Bird’s-eye-view Synthesis
Abstract: Global localization is essential for the smooth navigation of autonomous vehicles. To obtain accurate vehicle states, on-board localization systems typically rely on Global Navigation Satellite System (GNSS) modules for consistent and reliable global positioning. However, GNSS signals can be obstructed by natural or artificial barriers, leading to temporary system failures and degraded state estimation. On the [...]
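The abstract is truncated before any method details, but the title points at matching a synthesized bird's-eye-view (BEV) image against overhead imagery when GNSS drops out. The sketch below shows one generic way to do that with OpenCV template matching; the function name, the shared-orientation assumption, and the meters-per-pixel interface are illustrative choices of ours, not the talk's method.

```python
# Sketch: coarse GNSS-free position fix by matching a synthesized BEV image
# against a georeferenced aerial map (normalized cross-correlation).
import cv2
import numpy as np

def localize_bev(bev: np.ndarray, aerial_map: np.ndarray,
                 meters_per_pixel: float, origin_xy=(0.0, 0.0)):
    """bev, aerial_map: grayscale uint8 images in a shared orientation.
    Returns ((x, y) position in map coordinates, match score)."""
    result = cv2.matchTemplate(aerial_map, bev, cv2.TM_CCOEFF_NORMED)
    _, score, _, (col, row) = cv2.minMaxLoc(result)  # best-match corner
    # Center of the best-matching window, converted from pixels to meters.
    cx = col + bev.shape[1] / 2.0
    cy = row + bev.shape[0] / 2.0
    x = origin_xy[0] + cx * meters_per_pixel
    y = origin_xy[1] + cy * meters_per_pixel
    return (x, y), float(score)
```

In practice such a fix would be fused with odometry in a state estimator; the sketch only covers the map-matching step.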
Scaling up Robot Skill Learning with Generative Simulation
Abstract: Generalist robots need to learn a wide variety of skills to perform diverse tasks across multiple environments. Current robot training pipelines rely on humans either to provide kinesthetic demonstrations or to program simulation environments with manually designed reward functions for reinforcement learning. Such human involvement is a major bottleneck to scaling up robot learning across diverse [...]
Simulation as a Tool for Conspicuity Measurement
Abstract: The use of unmanned aerial vehicles (UAVs) for time-critical tasks is becoming increasingly popular. Operators are expected to use information from these UAV swarms to make real-time, informed decisions. Consequently, detecting and recognizing targets from video is pivotal to the success of these systems. At greater altitudes or with more vehicles, this [...]
VP4D: View Planning for 3D and 4D Scene Understanding
Abstract: View planning plays a critical role in scene reconstruction by selecting the views that most improve the reconstruction. Such reconstruction has played an important part in virtual production and computer animation, where a 3D map of the film set and motion capture of actors lead to an immersive experience. Current methods use uncertainty estimation in neural rendering of view [...]
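Uncertainty-driven view planning is typically a greedy next-best-view loop. Here is a minimal sketch of that loop, assuming a hypothetical `render_with_uncertainty(pose)` callback that returns a per-pixel predictive-uncertainty map from a neural rendering model; the abstract does not specify the actual estimator, so both the callback and the mean-uncertainty objective are our illustrative assumptions.

```python
# Sketch: greedy next-best-view selection by predicted rendering uncertainty.
import numpy as np

def next_best_view(candidate_poses, render_with_uncertainty):
    """Pick the candidate pose whose rendered view has the highest mean
    predicted uncertainty, i.e., the view expected to be most informative."""
    def info_gain(pose):
        uncertainty_map = render_with_uncertainty(pose)  # (H, W) array
        return float(np.mean(uncertainty_map))
    return max(candidate_poses, key=info_gain)
```

Repeating this selection after each capture, and retraining or updating the model in between, yields the incremental view-planning loop the abstract alludes to.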
Unlocking Generalization for Robotics via Modularity and Scale
Abstract: How can we build generalist robot systems? Looking at fields such as vision and language, the common theme has been large-scale end-to-end learning with massive, curated datasets. In robotics, on the other hand, scale alone may not be enough, due to the significant multimodality of robotics tasks, the lack of easily accessible data, and [...]
Automating Annotation Pipelines by Leveraging Multi-Modal Data
Abstract: The era of vision-language models (VLMs) trained on large web-scale datasets challenges conventional formulations of “open-world” perception. In this work, we revisit the task of few-shot object detection (FSOD) in the context of recent foundational VLMs. First, we point out that zero-shot VLMs such as GroundingDINO significantly outperform state-of-the-art few-shot detectors (48 vs. 33 AP) [...]
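For context on the zero-shot baseline discussed above, here is a hedged sketch of text-prompted open-vocabulary detection using the Hugging Face zero-shot-object-detection pipeline. We use OWL-ViT as a readily packaged stand-in for GroundingDINO (the model choice, the label set, and the file name scene.jpg are our illustrative assumptions, not the paper's setup); note that no few-shot training is involved.

```python
# Sketch: zero-shot object detection from text prompts alone.
from transformers import pipeline

detector = pipeline("zero-shot-object-detection",
                    model="google/owlvit-base-patch32")

# Detect novel categories specified only as text, with no detector training.
detections = detector(
    "scene.jpg",
    candidate_labels=["traffic cone", "forklift", "pallet"],
)
for det in detections:
    print(det["label"], round(det["score"], 3), det["box"])
```

The abstract's comparison (48 vs. 33 AP) is between detectors of this zero-shot flavor and dedicated few-shot detectors; the sketch shows only the inference interface of the former.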