Multi-linear Data-Driven Dynamic Hair Model with Efficient Hair-Body Collision Handling
Abstract
We present a data-driven method for learning hair models that enables the creation and animation of many interactive virtual characters in real time (for gaming, character pre-visualization, and design). Our model has a number of properties that make it appealing for interactive applications: (i) it preserves the key dynamic properties of physical simulation at a fraction of the computational cost, (ii) it gives the user continuous interactive control over the hair styles (e.g., lengths) and dynamics (e.g., softness) without requiring re-styling or re-simulation, (iii) it deals with hair-body collisions explicitly using optimization in the low-dimensional reduced space, and (iv) it allows modeling of external phenomena (e.g., wind). Our method builds on the recent success of reduced models for clothing and fluid simulation, but extends them in a number of significant ways. We model the motion of hair in a conditional reduced sub-space, where the hair basis vectors, which encode dynamics, are linear functions of user-specified hair parameters. We formulate collision handling as an optimization in this reduced sub-space using fast iterative least squares. We demonstrate our method by building dynamic, user-controlled models of hair styles.
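To make the abstract's two key ideas concrete, here is a minimal toy sketch (not the paper's actual model or data): a reduced hair state y = mean + B(p)z whose basis B(p) is a linear function of style parameters p, and collision handling posed as iterative least squares in the reduced space against a spherical body proxy. All dimensions, the random bases, and the sphere proxy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, P = 50, 8, 2                    # N hair points, D reduced dims, P style params
mean = rng.normal(size=3 * N)         # mean hair configuration (flattened xyz)
B_k = rng.normal(size=(P, 3 * N, D))  # one basis block per style parameter (toy)

def basis(p):
    """Basis vectors as a linear function of the style parameters p."""
    return np.tensordot(p, B_k, axes=1)          # (3N, D)

def reconstruct(p, z):
    """Map reduced coordinates z back to full hair point positions."""
    return mean + basis(p) @ z

# Collision handling as iterative least squares in the reduced space:
# push penetrating points to the surface of a spherical body proxy,
# then refit the reduced coordinates z to the corrected positions.
center, radius = np.zeros(3), 1.0

def resolve_collisions(p, z, iters=10):
    B = basis(p)
    for _ in range(iters):
        pts = reconstruct(p, z).reshape(N, 3)
        d = pts - center
        dist = np.linalg.norm(d, axis=1)
        inside = dist < radius
        if not inside.any():
            break
        # Project penetrating points onto the proxy surface.
        pts[inside] = center + d[inside] * (radius / dist[inside])[:, None]
        # Least-squares fit of z to the corrected full-space positions.
        z, *_ = np.linalg.lstsq(B, pts.ravel() - mean, rcond=None)
    return z

z0 = rng.normal(size=D)
z1 = resolve_collisions(np.array([0.7, 0.3]), z0)
```

Because B(p) is linear in p, a style change (e.g., interpolating hair length) only re-weights precomputed basis blocks, and the collision solve stays in the D-dimensional space rather than over all 3N coordinates.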
BibTeX
@conference{Guan-2012-121994,
author = {Peng Guan and Leonid Sigal and Valeria Reznitskaya and Jessica K. Hodgins},
title = {Multi-linear Data-Driven Dynamic Hair Model with Efficient Hair-Body Collision Handling},
booktitle = {Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '12)},
year = {2012},
month = {July},
pages = {295--304},
}