Recovering the Basic Structure of Human Activities From a Video-Based Symbol String
Abstract
In recent years, stochastic context-free grammars have been shown to be effective in modeling human activities because of the hierarchical structures they represent. However, most research in this area has yet to address the issue of learning activity grammars from a noisy input source, namely video. In this paper, we present a framework for identifying noise and recovering the basic activity grammar from a noisy symbol string produced by video. We identify the noise symbols by finding the set of non-noise symbols that optimally compresses the training data, where the optimality of compression is measured using a minimum description length (MDL) criterion. We demonstrate the robustness of our system to noise and its effectiveness in learning the basic structure of human activity through an experiment on real video from a local convenience store.
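The noise-identification step admits a compact illustration. The sketch below is not the paper's algorithm: it uses zlib as a crude stand-in for the grammar-based compression the paper relies on, a toy symbol string in place of video-derived symbols, and a brute-force search over symbol subsets, which only scales to small alphabets. The helper names (description_length, best_non_noise_set) and the exception-cost coding are assumptions made for this sketch. What it shows is only the MDL trade-off: a symbol is treated as noise when the bits saved by compressing the string without it exceed the bits needed to record its dropped occurrences.

import math
import random
import zlib
from itertools import combinations

def description_length(events: str, keep: set) -> float:
    """Two-part MDL score for a candidate set of non-noise symbols.

    Data cost: compressed size of the string restricted to the kept
    symbols (zlib stands in for a grammar-based compressor; repeated
    activity patterns compress well once noise is removed).
    Exception cost: identity and position bits for every occurrence
    dropped as noise, so information is never discarded for free.
    """
    alphabet = sorted(set(events))
    filtered = "".join(s for s in events if s in keep)
    dropped = len(events) - len(filtered)
    data_bits = 8 * len(zlib.compress(filtered.encode()))
    exception_bits = dropped * (math.log2(len(alphabet)) + math.log2(len(events)))
    return data_bits + exception_bits

def best_non_noise_set(events: str):
    """Exhaustively search symbol subsets for the minimum description length."""
    alphabet = sorted(set(events))
    best, best_dl = None, float("inf")
    for r in range(1, len(alphabet) + 1):
        for keep in combinations(alphabet, r):
            dl = description_length(events, set(keep))
            if dl < best_dl:
                best, best_dl = set(keep), dl
    return best, best_dl

if __name__ == "__main__":
    random.seed(0)
    # a repeated activity pattern "abc" with a sporadic noise symbol "x"
    events = "".join("abc" + ("x" if random.random() < 0.2 else "") for _ in range(40))
    keep, dl = best_non_noise_set(events)
    print("non-noise symbols:", sorted(keep), f"(DL ~ {dl:.0f} bits)")

On such a string, dropping the rare pattern-breaking "x" improves compression by more than the exception cost it incurs, while dropping a frequent structural symbol like "c" is prohibitively expensive, so the search recovers the non-noise set.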
BibTeX
@inproceedings{Kitani-2007-109832,
  author    = {Kris M. Kitani and Yoichi Sato and Akihiro Sugimoto},
  title     = {Recovering the Basic Structure of Human Activities From a Video-Based Symbol String},
  booktitle = {Proceedings of the IEEE Workshop on Motion and Video Computing (WMVC '07)},
  year      = {2007},
  month     = {February},
}