On Two Methods for Semi-Supervised Structured Prediction
Abstract
Obtaining labeled data for training classifiers is an expensive task that must be repeated for each new application. It is even more expensive for structured models, such as Conditional Random Fields, where some notion of coherence in the labeled data must be maintained. To address this issue, semi-supervised methods are often applied to reduce the number of labeled examples needed to train adequate models. Previous work in this area has resulted in complex training procedures that do not scale to the large numbers of examples, features, labels, and interactions necessary for vision tasks. In this paper, we present and analyze two novel approaches for semi-supervised training of structured models that satisfy these requirements. While we unfortunately do not observe a significant benefit from using unlabeled data in our real-world experiments, the simple algorithms we present here may be useful in other applications where the necessary assumptions are satisfied.
BibTeX
@techreport{Munoz-2010-10387,
  author      = {Daniel Munoz and J. Andrew (Drew) Bagnell and Martial Hebert},
  title       = {On Two Methods for Semi-Supervised Structured Prediction},
  year        = {2010},
  month       = {January},
  institution = {Carnegie Mellon University},
  address     = {Pittsburgh, PA},
  number      = {CMU-RI-TR-10-02},
}