Hand Keypoint Detection in Single Images Using Multiview Bootstrapping

Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, pp. 4645–4653, July 2017

Abstract

We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in real time on RGB images and has accuracy comparable to methods that use depth sensors. The single-view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.
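To illustrate the bootstrapping loop described above, the sketch below shows one iteration of generating reprojected pseudo-labels from multiview detections. It is a minimal illustration, not the authors' implementation: the detector function detect(), the 3x4 camera projection matrices, the reprojection-error threshold, the minimum-inlier count, and the pair-sampling RANSAC strategy are all assumed for the example; the paper additionally scores and selects frames before retraining the detector on the new labels.

import numpy as np

def triangulate_dlt(points_2d, proj_mats):
    # Triangulate one 3D point from 2D detections in several views (linear DLT).
    A = []
    for (x, y), P in zip(points_2d, proj_mats):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]                    # homogeneous -> Euclidean

def project(X, P):
    # Project a 3D point with a 3x4 projection matrix.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def robust_triangulate(points_2d, proj_mats, thresh=5.0, iters=100, rng=None):
    # RANSAC over sampled view pairs; returns (3D point, inlier mask) or (None, None).
    rng = rng or np.random.default_rng()
    n = len(points_2d)
    best_X, best_inliers = None, np.zeros(n, dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        X = triangulate_dlt([points_2d[i], points_2d[j]],
                            [proj_mats[i], proj_mats[j]])
        err = np.array([np.linalg.norm(project(X, P) - p)
                        for p, P in zip(points_2d, proj_mats)])
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_X, best_inliers = X, inliers
    if best_inliers.sum() >= 3:            # require enough agreeing views (assumed threshold)
        # Refine using all inlier views.
        best_X = triangulate_dlt(
            [p for p, m in zip(points_2d, best_inliers) if m],
            [P for P, m in zip(proj_mats, best_inliers) if m])
        return best_X, best_inliers
    return None, None                      # keypoint treated as an outlier for this frame

def bootstrap_labels(images, proj_mats, detect, num_keypoints=21):
    # One bootstrapping step for one multiview frame:
    # detect in every view, triangulate robustly, reproject inliers as new labels.
    detections = [detect(img) for img in images]      # each: (num_keypoints, 2) array
    labels = []                                        # (view index, keypoint index, 2D label)
    for k in range(num_keypoints):
        pts = [d[k] for d in detections]
        X, inliers = robust_triangulate(pts, proj_mats)
        if X is None:
            continue
        for v, P in enumerate(proj_mats):
            labels.append((v, k, project(X, P)))       # reprojection becomes a training label
    return labels

In a full iteration, the labels returned by bootstrap_labels would be added to the training set and the detector retrained, after which the process repeats with the improved detector.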

BibTeX

@conference{Simon-2017-122184,
author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},
title = {Hand Keypoint Detection in Single Images Using Multiview Bootstrapping},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2017},
month = {July},
pages = {4645--4653},
}