A research team at Carnegie Mellon teamed up with Meta FAIR; the University of California, Berkeley; the Technical University of Dresden in Germany; and the Centre for Tactile Internet with Human-in-the-Loop (CeTI) to propose NeuralFeels, a machine learning model that combines vision and touch sensing on a robot hand to reconstruct and track unseen objects.
Traditional sensing methods rely heavily on vision, but vision alone requires expensive tracking setups, and objects are often blocked from view during manipulation, limiting what the robot can see. The researchers aimed to emulate human sensing by combining vision and touch on a robot hand to estimate an object’s shape and 3D position throughout a manipulation task.
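The basic idea of pooling what the camera sees with what the fingertips feel, then using the combined measurements to update the object’s estimated pose, can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration in plain NumPy, not the NeuralFeels implementation; all function names and the single Kabsch/nearest-neighbor alignment step are assumptions made for the example.

```python
# Hypothetical sketch: fuse camera depth points with fingertip contact points,
# then update the object's pose estimate with one rigid-registration step.
# This illustrates the vision + touch idea only; it is not the NeuralFeels code.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def fuse_observations(vision_pts, touch_pts_per_finger):
    """Pool camera points with contact points from each fingertip sensor."""
    clouds = [vision_pts] + [p for p in touch_pts_per_finger if len(p) > 0]
    return np.concatenate(clouds, axis=0)

def rigid_registration(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def update_pose(model_pts, fused_pts, R_prev, t_prev):
    """One tracking step: align the current shape model to fused measurements."""
    pred = model_pts @ R_prev.T + t_prev            # model placed at the previous pose
    # Associate each measured point with its nearest predicted model point
    # (brute-force for clarity; a real system would use a spatial index).
    d = np.linalg.norm(fused_pts[:, None, :] - pred[None, :, :], axis=2)
    matched = pred[d.argmin(axis=1)]
    R_delta, t_delta = rigid_registration(matched, fused_pts)
    return R_delta @ R_prev, R_delta @ t_prev + t_delta
```

In the actual system the shape model is not a fixed point cloud: it is learned and refined online as new views and touches arrive, which is what allows the method to handle objects it has never seen before.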
Science Robotics featured NeuralFeels on the cover of its November 2024 issue and published the project’s paper. CMU participants in the project include Sudharshan Suresh, a 2024 Ph.D. graduate of the Robotics Institute, and Associate Professor Michael Kaess.
The research is part of a broader collaboration between the CMU Robot Perception Lab and FAIR on multimodal perception for robotics. NeuralFeels builds on Suresh and Kaess’ work in tactile simultaneous localization and mapping (SLAM) for robot arms and in vision-based touch sensing. Their research integrates concepts from SLAM, neural rendering and tactile simulation to advance home robots’ ability to understand the world.
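The “neural rendering” ingredient can be pictured as rendering synthetic depth from an implicit shape model so it can be compared against what the cameras and tactile sensors actually report. The sketch below is a minimal, hypothetical illustration with an analytic sphere standing in for a learned signed-distance field; the function names and parameters are assumptions, and this is not the project’s code.

```python
# Hypothetical sketch: render depth from an implicit signed-distance field (SDF)
# by sphere tracing. An analytic sphere stands in for a learned SDF to keep the
# example self-contained; this is not the NeuralFeels implementation.
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=0.03):
    """Signed distance from point p to a 3 cm sphere (stand-in for a learned SDF)."""
    return np.linalg.norm(p - center) - radius

def render_depth(ray_origin, ray_dir, sdf, max_depth=0.5, eps=1e-4, max_steps=64):
    """Sphere-trace one ray through the SDF and return the hit depth (or None)."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(ray_origin + t * ray_dir)
        if d < eps:              # surface reached
            return t
        t += d                   # safe step: the SDF value bounds the distance to the surface
        if t > max_depth:
            break
    return None                  # ray missed the object

# Example: a camera 20 cm in front of the object, looking straight at it.
depth = render_depth(np.array([0.0, 0.0, -0.2]), np.array([0.0, 0.0, 1.0]), sdf_sphere)
print(f"rendered depth: {depth:.4f} m")   # ~0.17 m (20 cm minus the 3 cm radius)
```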
The team found that NeuralFeels achieved tracking errors of just a few millimeters and, under heavy occlusion, improved on vision-only methods by up to 94%. The advancements bring robots closer to safely and accurately handling tasks that require high dexterity without prior knowledge of the object.
For More Information: Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu