Controllable Visual-Tactile Synthesis - Robotics Institute Carnegie Mellon University
PhD Speaking Qualifier

Ruihan Gao
PhD Student, Robotics Institute, Carnegie Mellon University
Friday, March 17
10:00 am to 11:00 am
GHC 6501
Controllable Visual-Tactile Synthesis
Abstract:

Deep generative models have many content creation applications, such as graphic design, e-commerce, and virtual try-on. However, most existing work focuses on synthesizing realistic visual outputs and ignores other sensory modalities, such as touch, which limits physical interaction with users. The main challenges for multi-modal synthesis lie in the significant scale discrepancy between vision and touch sensing and the lack of an explicit mapping from touch sensing data to a haptic rendering device.

In my talk, I present our approach of leveraging deep generative models to create a multi-sensory experience in which users can both see and touch the synthesized object as they slide their fingers across a haptic surface. We collect high-resolution tactile data with a GelSight sensor and build a new visuotactile clothing dataset. We then develop a conditional generative model that synthesizes both visual and tactile outputs from a single sketch. Finally, we introduce a pipeline that renders high-quality visual and tactile outputs on an electroadhesion-based haptic device for an immersive experience; the pipeline handles challenging materials and supports editable sketch inputs.
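To make the high-level idea concrete, below is a minimal, hypothetical PyTorch sketch of a sketch-conditioned generator with a shared encoder and two decoders, one producing a visual (RGB) output and one producing a tactile map. The architecture, layer sizes, and names here are illustrative assumptions only and are not the model presented in the talk.

import torch
import torch.nn as nn

# Hypothetical two-branch conditional generator: a shared encoder maps a
# 1-channel sketch to features, and two decoders produce an RGB image and a
# tactile map (e.g., a surface-normal map in the style of GelSight data).
# All layer sizes are illustrative, not the actual model from the talk.
class SketchToVisuoTactile(nn.Module):
    def __init__(self, base_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

        def make_decoder(out_channels):
            return nn.Sequential(
                nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(base_channels, out_channels, 4, stride=2, padding=1),
                nn.Tanh(),
            )

        self.visual_decoder = make_decoder(3)   # RGB image
        self.tactile_decoder = make_decoder(3)  # tactile / surface-normal map

    def forward(self, sketch):
        features = self.encoder(sketch)
        return self.visual_decoder(features), self.tactile_decoder(features)

# Example: one 256x256 sketch in, a paired visual and tactile output out.
model = SketchToVisuoTactile()
sketch = torch.randn(1, 1, 256, 256)
rgb, tactile = model(sketch)
print(rgb.shape, tactile.shape)  # torch.Size([1, 3, 256, 256]) for both

Sharing a single encoder across both decoders is one simple way to keep the visual and tactile outputs spatially aligned with the same input sketch; a full system would also need adversarial or reconstruction losses trained on paired visuotactile data.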

The rendering device will be provided for a demo after the presentation. Everyone is welcome to try it out!

Committee:
Jun-Yan Zhu
Wenzhen Yuan
Roberta Klatzky
Vivian Shen