VR facial animation via multiview image translation
Abstract
A key promise of Virtual Reality (VR) is the possibility of remote social interaction that is more immersive than any prior telecommunication media. However, existing social VR experiences are mediated by inauthentic digital representations of the user (i.e., stylized avatars). These stylized representations have limited the adoption of social VR applications in precisely those cases where immersion is most necessary (e.g., professional interactions and intimate conversations). In this work, we present a bidirectional system that can animate avatar heads of both users in their full likeness using consumer-friendly headset-mounted cameras (HMCs). There are two main challenges in doing this: unaccommodating camera views and the image-to-avatar domain gap. We address both challenges by leveraging constraints imposed by multiview geometry to establish precise image-to-avatar correspondences, which are then used to learn an end-to-end model for real-time tracking. We present designs for a training HMC, aimed at data collection and model building, and a tracking HMC for use during interactions in VR. Correspondence between the avatar and the HMC-acquired images is found automatically through self-supervised multiview image translation, which requires neither manual annotation nor one-to-one correspondence between domains. We evaluate the system on a variety of users and demonstrate significant improvements over prior work.
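The abstract describes a two-stage pipeline: first, self-supervised image translation between HMC images and avatar renders establishes image-to-avatar correspondences; second, those correspondences supervise an end-to-end regressor that drives the avatar in real time. The sketch below illustrates that structure only. It is not the authors' implementation: the module names, network sizes, and the CycleGAN-style cycle-consistency loss standing in for the paper's translation objective are all assumptions made for illustration.

```python
# Illustrative sketch of the two-stage idea in the abstract (assumed details,
# not the paper's actual architecture or losses).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Strided conv that halves spatial resolution.
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))

def deconv_block(cin, cout):
    # Transposed conv that doubles spatial resolution.
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))

class Translator(nn.Module):
    """Encoder-decoder mapping images from one domain to the other."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 32), conv_block(32, 64), conv_block(64, 128),
            deconv_block(128, 64), deconv_block(64, 32),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class ExpressionRegressor(nn.Module):
    """Maps multiview HMC images directly to avatar expression parameters."""
    def __init__(self, num_views=3, code_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(conv_block(num_views, 32), conv_block(32, 64),
                                      conv_block(64, 128), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(128, code_dim)
    def forward(self, views):
        return self.head(self.backbone(views).flatten(1))

# Stage 1: unpaired translation between HMC crops and avatar renders from
# matching viewpoints. A cycle-consistency objective stands in here for the
# paper's self-supervised multiview translation step.
G_h2a, G_a2h = Translator(), Translator()
l1 = nn.L1Loss()
opt = torch.optim.Adam(list(G_h2a.parameters()) + list(G_a2h.parameters()), lr=2e-4)

hmc = torch.rand(4, 1, 64, 64) * 2 - 1     # dummy headset-camera crops
render = torch.rand(4, 1, 64, 64) * 2 - 1  # dummy avatar renders

opt.zero_grad()
cycle_loss = l1(G_a2h(G_h2a(hmc)), hmc) + l1(G_h2a(G_a2h(render)), render)
cycle_loss.backward()
opt.step()

# Stage 2: with correspondences established (faked here), train the real-time
# regressor that drives the avatar directly from the multiview HMC images.
regressor = ExpressionRegressor()
views = torch.rand(4, 3, 64, 64)   # e.g. mouth and two eye cameras
target_code = torch.rand(4, 256)   # avatar expression code obtained in stage 1
tracking_loss = l1(regressor(views), target_code)
```

In this reading, the expensive translation stage is only needed offline with the training HMC, while the lightweight regressor is what runs in real time on the tracking HMC.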
BibTeX
@article{Wei-2019-122160,
  author  = {Shih-En Wei and Jason Saragih and Tomas Simon and Adam W. Harley and Stephen Lombardi and Michal Perdoch and Alexander Hypes and Dawei Wang and Hernan Badino and Yaser Sheikh},
  title   = {VR facial animation via multiview image translation},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2019},
  month   = {July},
  volume  = {38},
  number  = {4},
}