Driving-signal aware full-body avatars
Abstract
We present a learning-based method for building driving-signal aware full-body avatars. Our model is a conditional variational autoencoder that can be animated with incomplete driving signals, such as human pose and facial keypoints, and produces a high-quality representation of human geometry and view-dependent appearance. The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals from the remaining generative factors, which are not available during animation. To this end, we explicitly account for information deficiency in the driving signal by introducing a latent space that exclusively captures the remaining information, thus enabling the imputation of the missing factors required during full-body animation, while remaining faithful to the driving signal. We also propose a learnable localized compression for the driving signal which promotes better generalization, and helps minimize the influence of global chance correlations often found in real datasets. For a given driving signal, the resulting variational model produces a compact space of uncertainty for missing factors that allows for an imputation strategy best suited to a particular application. We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence with driving signals acquired from minimal sensors placed in the environment and mounted on a VR headset.
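The two ideas in the abstract lend themselves to a compact sketch: a localized, per-joint compression of the driving pose, and a conditional VAE whose latent captures only the factors missing from the driving signal so they can be imputed at animation time. Below is a minimal, hypothetical PyTorch rendition; all layer sizes, tensor shapes, and names (LocalizedDrivingEncoder, DrivingAwareCVAE, animate) are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming simplified vectorized inputs; not the paper's code.
import torch
import torch.nn as nn


class LocalizedDrivingEncoder(nn.Module):
    """Compress each joint of the driving pose independently, so the decoder
    cannot latch onto global chance correlations across distant body parts."""

    def __init__(self, num_joints=63, joint_dim=3, code_dim=8):
        super().__init__()
        self.per_joint = nn.Sequential(
            nn.Linear(joint_dim, 32), nn.ReLU(), nn.Linear(32, code_dim)
        )

    def forward(self, pose):                 # pose: (B, num_joints, joint_dim)
        codes = self.per_joint(pose)         # (B, num_joints, code_dim)
        return codes.flatten(1)              # (B, num_joints * code_dim)


class DrivingAwareCVAE(nn.Module):
    """Encoder sees the full observation plus the driving code and infers a
    latent z holding only the information the driving signal lacks; the
    decoder reconstructs geometry/appearance from (driving code, z)."""

    def __init__(self, obs_dim=4096, drive_dim=63 * 8, z_dim=16, out_dim=4096):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + drive_dim, 512), nn.ReLU(), nn.Linear(512, 2 * z_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(drive_dim + z_dim, 512), nn.ReLU(), nn.Linear(512, out_dim)
        )
        self.z_dim = z_dim

    def forward(self, obs, drive_code):
        stats = self.encoder(torch.cat([obs, drive_code], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.decoder(torch.cat([drive_code, z], dim=-1))
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return recon, kl                      # train with reconstruction loss + kl

    def animate(self, drive_code, z=None):
        # At animation time the full observation is unavailable; impute the
        # missing factors by choosing z (prior mean, a sample, a held value, ...).
        if z is None:
            z = torch.zeros(drive_code.shape[0], self.z_dim, device=drive_code.device)
        return self.decoder(torch.cat([drive_code, z], dim=-1))


# Usage (hypothetical shapes): drive the avatar from pose alone at test time.
pose = torch.randn(1, 63, 3)
drive = LocalizedDrivingEncoder()(pose)
avatar = DrivingAwareCVAE().animate(drive)    # (1, 4096) decoded geometry/appearance

Setting z to the prior mean in animate is only one imputation strategy; because the model yields a compact space of uncertainty over the missing factors, sampling or temporally smoothing z are equally viable, depending on the application.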
BibTeX
@article{Bagautdinov-2021-128860,
  author  = {Timur Bagautdinov and Chenglei Wu and Tomas Simon and Fabian Prada and Takaaki Shiratori and Shih-En Wei and Weipeng Xu and Yaser Sheikh and Jason Saragih},
  title   = {Driving-signal aware full-body avatars},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2021},
  month   = {August},
  volume  = {40},
  number  = {4},
}