Fast Unsupervised Ego-Action Learning for First-person Sports Videos
Abstract
Portable high-quality sports cameras (e.g. head or helmet mounted) built for recording dynamic first-person video footage are becoming common among sports enthusiasts. We address the novel task of discovering first-person action categories (which we call ego-actions), which can be useful for tasks such as video indexing and retrieval. In order to learn ego-action categories, we investigate the use of motion-based histograms and unsupervised learning algorithms to quickly cluster video content. Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented, and the number of ego-action categories is unknown. In our proposed framework we show that a stacked Dirichlet process mixture model can be used to automatically learn a motion histogram codebook and the set of ego-action categories. We quantitatively evaluate our approach on both in-house and public YouTube videos and demonstrate robust ego-action categorization across several sports genres. Comparative analysis shows that our approach outperforms other state-of-the-art topic models with respect to both classification accuracy and computational speed. Preliminary results indicate that, on average, the categorical content of a 10-minute video sequence can be indexed in under 5 seconds.
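To make the pipeline described above concrete, the sketch below illustrates one plausible instantiation: per-frame motion histograms computed from dense optical flow, followed by unsupervised clustering with a Dirichlet process mixture so the number of ego-action categories need not be fixed in advance. The library choices (OpenCV Farneback flow, scikit-learn's truncated DP Gaussian mixture) and the function names are illustrative assumptions; the paper's stacked Dirichlet process mixture model, which also learns the histogram codebook, is not reproduced here.

```python
# Minimal sketch of a motion-histogram + DP-mixture clustering pipeline.
# This is an illustrative approximation, not the authors' implementation.
import cv2
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def motion_histograms(video_path, n_bins=12):
    """Return one magnitude-weighted flow-direction histogram per frame."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    hists = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # Histogram of flow directions, weighted by flow magnitude, then normalized.
        h, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
        hists.append(h / (h.sum() + 1e-8))
        prev_gray = gray
    cap.release()
    return np.array(hists)

def cluster_ego_actions(hists, max_components=20):
    """Cluster per-frame histograms; the Dirichlet process prior prunes
    unused components, so the category count emerges from the data."""
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        max_iter=200,
    )
    return dpgmm.fit_predict(hists)  # one ego-action label per frame

# Example usage (hypothetical file name):
# labels = cluster_ego_actions(motion_histograms("ski_run.mp4"))
```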
BibTeX
@conference{Kitani-2011-109824,
author = {Kris M. Kitani and Takahiro Okabe and Yoichi Sato and Akihiro Sugimoto},
title = {Fast Unsupervised Ego-Action Learning for First-person Sports Videos},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2011},
month = {June},
pages = {3241--3248},
}