Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling
Abstract
Obtaining large annotated datasets is critical for training successful machine learning models, and it is often a bottleneck in practice. Weak supervision offers a promising alternative for producing labeled datasets without ground-truth annotations by generating probabilistic labels from multiple noisy heuristics. This process can scale to large datasets and has demonstrated state-of-the-art performance in diverse domains such as healthcare and e-commerce. One practical issue with learning from user-generated heuristics is that their creation requires creativity, foresight, and domain expertise from those who hand-craft them, a process which can be tedious and subjective. We develop the first framework for interactive weak supervision, in which a method proposes heuristics and learns from user feedback given on each proposed heuristic. Our experiments demonstrate that only a small number of feedback iterations are needed to train models that achieve highly competitive test set performance without access to ground-truth training labels. We conduct user studies, which show that users are able to effectively provide feedback on heuristics and that test set results track the performance of simulated oracles.
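To make the interactive loop described above concrete, below is a minimal Python sketch of the workflow, not the authors' implementation: candidate heuristics (labeling functions) are proposed one at a time, a user accepts or rejects each, and the accepted ones are aggregated into probabilistic labels. A simple majority vote stands in for the learned label model and acquisition strategy used in the paper, and all data, function names, and the simulated oracle are illustrative assumptions.

import numpy as np

# Toy unlabeled corpus (illustrative only).
docs = [
    "refund my order now",
    "great product, thanks",
    "terrible service, want my money back",
    "love it, five stars",
]

# Candidate heuristics: each maps a document to +1, -1, or 0 (abstain).
# These stand in for the automatically generated candidates that an
# interactive weak supervision method would propose to the user.
def lf_refund(d):
    return 1 if "refund" in d or "money back" in d else 0

def lf_praise(d):
    return -1 if "love" in d or "great" in d else 0

def lf_noise(d):
    return 1 if len(d) % 2 == 0 else -1  # a junk heuristic

candidates = [lf_refund, lf_praise, lf_noise]

def user_feedback(lf):
    """Placeholder for the interactive step: a domain expert inspects a
    proposed heuristic and answers whether it looks better than random.
    Here a simulated oracle rejects only the junk heuristic."""
    return lf is not lf_noise

# Interactive loop: propose each candidate, keep the ones the user accepts.
accepted = [lf for lf in candidates if user_feedback(lf)]

# Aggregate accepted heuristics into probabilistic labels via majority
# vote over non-abstaining heuristics (a stand-in for a learned label model).
votes = np.array([[lf(d) for lf in accepted] for d in docs])
prob_positive = (votes == 1).sum(axis=1) / np.maximum((votes != 0).sum(axis=1), 1)
for d, p in zip(docs, prob_positive):
    print(f"P(complaint)={p:.2f}  {d!r}")

In a full pipeline, the resulting probabilistic labels would then be used to train a downstream classifier, and the feedback collected on each heuristic would inform which candidate is proposed next.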
BibTeX
@conference{Boecking-2021-127215,
  author    = {Benedikt Boecking and Willie Neiswanger and Eric P. Xing and Artur Dubrawski},
  title     = {Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling},
  booktitle = {Proceedings of (ICLR) International Conference on Learning Representations},
  year      = {2021},
  month     = {May},
}