Learning the Semantic Correlation: An Alternative Way to Gain from Unlabeled Text
Abstract
In this paper, we address the question of what kind of knowledge is generally transferable from unlabeled text. We suggest and analyze the semantic correlation of words as a generally transferable structure of the language, and propose a new method to learn this structure using an appropriately chosen latent variable model. The semantic correlation captures structural information about the language space and can be used, through regularization, to control the joint shrinkage of model parameters for any specific task in the same space. In an empirical study, we construct 190 different text classification tasks from a real-world benchmark; the unlabeled documents are a mixture drawn from all of these tasks. We test the ability of various algorithms to use the mixed unlabeled text to enhance all of the classification tasks. Empirical results show that the proposed approach is a reliable and scalable method for semi-supervised learning, regardless of the source of the unlabeled data, the specific task to be enhanced, or the prediction model used.
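To make the two-step idea concrete, here is a minimal, hedged sketch: estimate a word-word correlation matrix from mixed unlabeled documents, then use it as an informative Gaussian prior (penalizing w^T Sigma^{-1} w instead of the isotropic ||w||^2) when fitting a supervised model in the same word space. The function names, the PCA-style low-rank factor step, and the parameters n_factors, ridge, and lam are illustrative assumptions; the paper itself considers several latent variable models for estimating the correlation, and this is not its exact pipeline.

```python
import numpy as np
from scipy.optimize import minimize


def estimate_semantic_correlation(X_unlabeled, n_factors=50, ridge=1e-3):
    """Estimate a word-word correlation matrix from unlabeled text.

    X_unlabeled: (n_docs, n_words) term matrix of the mixed unlabeled
    documents. A simple stand-in for a latent variable model: keep a
    low-rank (PCA-style) reconstruction of the word covariance.
    """
    Xc = X_unlabeled - X_unlabeled.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_factors, len(s))
    # Low-rank covariance: V_k diag(s_k^2) V_k^T / n_docs.
    cov = (Vt[:k].T * (s[:k] ** 2)) @ Vt[:k] / X_unlabeled.shape[0]
    cov += ridge * np.eye(cov.shape[0])  # keep it invertible
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)          # covariance -> correlation


def fit_logreg_with_correlation(X, y, corr, lam=1.0):
    """Logistic regression (y in {0,1}) with a structured prior.

    The penalty lam * w^T corr^{-1} w shrinks the weights of
    semantically correlated words jointly, rather than independently.
    """
    P = np.linalg.inv(corr)  # precision matrix of the Gaussian prior

    def neg_log_posterior(w):
        z = X @ w
        log_lik = y @ z - np.logaddexp(0, z).sum()
        return -log_lik + lam * (w @ P @ w)

    def grad(w):
        z = X @ w
        p = 1.0 / (1.0 + np.exp(-z))  # predicted probabilities
        return X.T @ (p - y) + 2.0 * lam * (P @ w)

    w0 = np.zeros(X.shape[1])
    res = minimize(neg_log_posterior, w0, jac=grad, method="L-BFGS-B")
    return res.x


# Usage sketch: the correlation is learned once from the mixed unlabeled
# pool, then reused for any labeled task over the same vocabulary:
#   corr = estimate_semantic_correlation(X_unlabeled)
#   w = fit_logreg_with_correlation(X_train, y_train, corr)
```

Because the structure lives in the shared word space rather than in any one task, the same matrix can regularize every downstream classifier, which is what allows a single pool of mixed unlabeled text to help all 190 tasks at once.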
BibTeX
@conference{Zhang-2008-119820,
  author    = {Y. Zhang and J. Schneider and A. Dubrawski},
  title     = {Learning the Semantic Correlation: An Alternative Way to Gain from Unlabeled Text},
  booktitle = {Proceedings of Neural Information Processing Systems (NeurIPS)},
  year      = {2008},
  month     = {December},
  pages     = {1945--1952},
}