Automatic Annotation of Everyday Movements
Conference Paper, Proceedings of (NeurIPS) Neural Information Processing Systems, pp. 1547-1554, December 2003
Abstract
This paper describes a system that can annotate a video sequence with: a description of the appearance of each actor; when the actor is in view; and a representation of the actor's activity while in view. The system does not require a fixed background, and is automatic. The system works by (1) tracking people in 2D and then, using an annotated motion capture dataset, (2) synthesizing an annotated 3D motion sequence matching the 2D tracks. The 3D motion capture data is manually annotated off-line using a class structure that describes everyday motions and allows motion annotations to be composed — one may jump while running, for example. Descriptions computed from video of real motions show that the method is accurate.
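The abstract's key idea, that annotations compose because each motion-capture frame can carry several labels at once, can be illustrated with a minimal sketch. Everything below (the `AnnotatedFrame` class, the 1-D "poses", the nearest-neighbor matching) is a hypothetical stand-in for the paper's actual tracking and synthesis pipeline, shown only to make the label-composition idea concrete:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedFrame:
    """A motion frame paired with a *set* of labels, so composed
    motions like 'jump while running' are simply {'run', 'jump'}."""
    pose: tuple                      # toy pose representation
    labels: frozenset = field(default_factory=frozenset)

def synthesize_annotation(track, mocap_frames):
    """Copy labels onto each observed pose from the nearest
    motion-capture frame (a crude stand-in for matching 2D
    tracks against the annotated 3D motion library)."""
    out = []
    for obs in track:
        best = min(mocap_frames,
                   key=lambda f: sum((a - b) ** 2
                                     for a, b in zip(obs, f.pose)))
        out.append(AnnotatedFrame(obs, best.labels))
    return out

# Toy annotated motion-capture library: 1-D "poses" for illustration.
mocap = [
    AnnotatedFrame((0.0,), frozenset({"stand"})),
    AnnotatedFrame((1.0,), frozenset({"run"})),
    AnnotatedFrame((2.0,), frozenset({"run", "jump"})),  # composed labels
]

annotated = synthesize_annotation([(0.1,), (1.9,)], mocap)
```

Here the second observed pose lands nearest the composed `{"run", "jump"}` frame, so the output annotation inherits both labels at once, which is the composition property the class structure is designed to allow.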
BibTeX
@conference{Ramanan-2003-121233,
  author = {Deva Ramanan and David A. Forsyth},
  title = {Automatic Annotation of Everyday Movements},
  booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
  year = {2003},
  month = {December},
  pages = {1547--1554},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.