Generalization Bounds for Transfer Learning under Model Shift

X. Wang and J. Schneider
Conference Paper, Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI '15), pp. 922–931, July 2015

Abstract

Transfer learning (sometimes also referred to as domain adaptation) algorithms are often used when one tries to apply a model learned from a fully labeled source domain to an unlabeled target domain that is similar but not identical to the source. Previous work on covariate shift focuses on matching the marginal distributions over observations X across domains while assuming the conditional distribution P(Y|X) stays the same; relevant theory for the covariate-shift setting has also been developed. Recent work on transfer learning under model shift handles conditional distributions P(Y|X) that differ across domains, using a few target labels and assuming the changes are smooth. However, no analysis has been provided to say when these algorithms work. In this paper, we analyze transfer learning algorithms under the model shift assumption. Our analysis shows that when the conditional distribution changes, we are able to obtain a generalization error bound of O(1/(λ*√n_l)) with respect to the labeled target sample size n_l, modified by the smoothness of the change (λ*) across domains. Our analysis also sheds light on conditions under which transfer learning works better than no-transfer learning (learning from labeled target data only). Furthermore, we extend the transfer learning algorithm from a single source to multiple sources.
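
To make the model-shift setting concrete, below is a minimal Python sketch of the underlying idea: fit a predictor on the labeled source domain, then learn a smooth correction from the few labeled target points. This is not the paper's exact algorithm (the paper analyzes a transformation-based approach and its generalization bound); it swaps in scikit-learn's KernelRidge for the paper's estimators, and the variable names (f_s, g) and toy data are illustrative assumptions.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Source domain: plentiful labels. Target domain: P(Y|X) differs by a
# smooth offset, and only a handful of labeled points (n_l small) exist.
Xs = rng.uniform(0, 1, (200, 1))
ys = np.sin(2 * np.pi * Xs[:, 0]) + 0.1 * rng.standard_normal(200)

Xt = rng.uniform(0, 1, (10, 1))
yt = np.sin(2 * np.pi * Xt[:, 0]) + 0.5 * Xt[:, 0] + 0.1 * rng.standard_normal(10)

# Step 1: fit the source regression f_s on the fully labeled source data.
f_s = KernelRidge(kernel="rbf", alpha=1e-2, gamma=10.0).fit(Xs, ys)

# Step 2: fit a correction g(x) ≈ y_t - f_s(x) on the labeled target sample.
# The heavy regularization encodes the smooth-change assumption; it plays
# the role of the smoothness parameter λ* in the paper's bound.
g = KernelRidge(kernel="rbf", alpha=1.0, gamma=10.0).fit(Xt, yt - f_s.predict(Xt))

# Transferred predictor: f_t(x) = f_s(x) + g(x).
Xq = np.linspace(0, 1, 5).reshape(-1, 1)
print(f_s.predict(Xq) + g.predict(Xq))

The division of labor mirrors the analysis: the source model absorbs the shared structure, while the correction g must be learned from only n_l target labels, which is why the error bound scales with the target sample size and the smoothness of the change rather than with the complexity of the full target function.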

BibTeX

@conference{-2015-119765,
author = {X. Wang and J. Schneider},
title = {Generalization Bounds for Transfer Learning under Model Shift},
booktitle = {Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI '15)},
year = {2015},
month = {July},
pages = {922--931},
}