Video Annotation and Tracking with Active Learning
Abstract
We introduce a novel active learning framework for video annotation. By judiciously choosing which frames a user should annotate, we can obtain highly accurate tracks with minimal user effort. We cast this frame-selection problem as one of active learning, and show that excellent performance can be obtained by querying frames that, if annotated, would produce a large expected change in the estimated object track. We implement a constrained tracker and compute the expected change for putative annotations with efficient dynamic programming algorithms. We demonstrate our framework on four datasets, including two benchmark datasets constructed with key-frame annotations obtained through Amazon Mechanical Turk. Our results indicate that equivalent labels could be obtained for a small fraction of the original cost.
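The sketch below illustrates, in Python, the two ingredients the abstract names: a dynamic-programming tracker constrained to pass through user-annotated frames, and an active-learning query that selects the frame whose annotation would most change the estimated track. It is a minimal, hypothetical rendering, not the paper's implementation: the discretized K-location state space, the softmax posterior proxy, and the names constrained_track and query_frame are all assumptions, and the brute-force re-solve inside query_frame stands in for the more efficient dynamic-programming recurrences the paper derives.

import numpy as np

def constrained_track(unary, pairwise, annotations):
    # unary: (T, K) appearance cost of placing the object at each of K
    #   candidate locations in each of T frames (hypothetical discretization).
    # pairwise: (K, K) motion cost between locations in adjacent frames.
    # annotations: {frame_index: location_index} given by the user; the
    #   returned track is constrained to pass through these states.
    T, K = unary.shape
    cost = unary.copy()
    for t, k in annotations.items():
        cost[t] = np.inf          # forbid every state at an annotated frame...
        cost[t, k] = unary[t, k]  # ...except the user-labeled one
    dp = np.empty((T, K))
    back = np.zeros((T, K), dtype=int)
    dp[0] = cost[0]
    for t in range(1, T):
        trans = dp[t - 1][:, None] + pairwise  # (prev state, current state)
        back[t] = trans.argmin(axis=0)
        dp[t] = trans.min(axis=0) + cost[t]
    track = np.empty(T, dtype=int)
    track[-1] = dp[-1].argmin()
    for t in range(T - 1, 0, -1):  # backtrack the min-cost path
        track[t - 1] = back[t, track[t]]
    return track

def query_frame(unary, pairwise, annotations, locations):
    # locations: (K, 2) image coordinates of the K candidate states.
    # Scores each unannotated frame by the expected change in the track if
    # the user were to label it, weighting putative labels by a softmax
    # posterior proxy over the frame's unary costs (an assumption here).
    T, K = unary.shape
    current = locations[constrained_track(unary, pairwise, annotations)]
    best_frame, best_score = None, -np.inf
    for t in range(T):
        if t in annotations:
            continue
        post = np.exp(unary[t].min() - unary[t])  # stabilized softmax
        post /= post.sum()
        expected_change = 0.0
        for k in range(K):
            if post[k] < 1e-6:
                continue  # skip negligible label hypotheses
            new = locations[constrained_track(unary, pairwise,
                                              {**annotations, t: k})]
            expected_change += post[k] * np.linalg.norm(new - current,
                                                        axis=1).sum()
        if expected_change > best_score:
            best_frame, best_score = t, expected_change
    return best_frame

Note that this brute-force scoring runs one full DP solve per (frame, putative label) pair; the paper's contribution is computing these expected changes efficiently, so the sketch is only meant to make the selection criterion concrete.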
BibTeX
@conference{Vondrick-2011-121210,
author = {C. Vondrick and D. Ramanan},
title = {Video Annotation and Tracking with Active Learning},
booktitle = {Proceedings of Neural Information Processing Systems (NeurIPS)},
year = {2011},
month = {December},
pages = {28--36},
}