LBS autoencoder: Self-supervised fitting of articulated meshes to point clouds

Chun-Liang Li, Tomas Simon, Jason Saragih, Barnabás Póczos, and Yaser Sheikh
Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, pp. 11959–11968, June, 2019

Abstract

We present LBS-AE, a self-supervised autoencoding algorithm for fitting articulated mesh models to point clouds. As input, we take a sequence of point clouds to be registered as well as an artist-rigged mesh, i.e., a template mesh equipped with a linear-blend skinning (LBS) deformation space parameterized by a skeleton hierarchy. As output, we learn an LBS-based autoencoder that produces registered meshes from the input point clouds. To bridge the gap between the artist-defined geometry and the captured point clouds, our autoencoder models pose-dependent deviations from the template geometry. During training, instead of using explicit correspondences, such as key points or pose supervision, our method leverages LBS deformations to bootstrap the learning process. To avoid poor local minima from erroneous point-to-point correspondences, we utilize a structured Chamfer distance based on part segmentations, which are learned concurrently using self-supervision. We demonstrate qualitative results on real captured hands, and report quantitative evaluations on the FAUST benchmark for body registration. Our method achieves performance that is superior to other unsupervised approaches and comparable to methods using supervised examples.
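The two building blocks the abstract names, the LBS deformation model and the part-based structured Chamfer distance, can be sketched in a few lines of NumPy. The sketch below is illustrative only and is not the authors' implementation; the function names (lbs_deform, chamfer, structured_chamfer) and the array layouts (vertices N×3, skinning weights N×K, joint transforms K×4×4 homogeneous matrices) are assumptions made for this example.

import numpy as np

def lbs_deform(verts, weights, transforms):
    # Linear-blend skinning: each deformed vertex is a weighted blend of
    # the K per-joint rigid transforms applied to its rest-pose position.
    # verts: (N, 3), weights: (N, K) with rows summing to 1, transforms: (K, 4, 4).
    vh = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # homogeneous (N, 4)
    per_joint = np.einsum('kij,nj->nki', transforms, vh)            # each joint's transform applied: (N, K, 4)
    blended = np.einsum('nk,nki->ni', weights, per_joint)           # weighted blend over joints: (N, 4)
    return blended[:, :3]

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a (M, 3) and b (P, 3):
    # mean nearest-neighbor distance in both directions.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)      # (M, P) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def structured_chamfer(a, a_labels, b, b_labels, num_parts):
    # Structured variant: correspondences are only allowed between points
    # carrying the same part label, ruling out gross mismatches
    # (e.g., matching a thumb point to the wrist).
    total = 0.0
    for k in range(num_parts):
        ak, bk = a[a_labels == k], b[b_labels == k]
        if len(ak) and len(bk):                                     # skip parts absent from either set
            total += chamfer(ak, bk)
    return total

In LBS-AE, the part labels on the input cloud are themselves predicted by a segmentation model trained with self-supervision, so the structured loss becomes more reliable as the segmentation improves.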

BibTeX

@conference{Li-2019-122170,
author = {Chun-Liang Li and Tomas Simon and Jason Saragih and Barnabás Póczos and Yaser Sheikh},
title = {LBS autoencoder: Self-supervised fitting of articulated meshes to point clouds},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2019},
month = {June},
pages = {11959--11968},
}