On learning embedded symmetric concepts
Abstract
An embedded symmetric concept is a concept that is symmetric over its set of relevant variables; that is, its value depends only on how many of the relevant variables are satisfied. The class of such concepts is a natural generalization of several fundamental classes, including Boolean threshold functions, parity functions, and monotone monomials. In this paper, we characterize which types of embedded symmetric concepts are learnable in a representation-dependent sense and which are not. In particular, we show that under standard complexity assumptions, among "reasonable" subclasses of symmetric concepts only a very small set (constants, conjunctions, disjunctions, parity, consensus, and their negations) is PAC-learnable in a representation-dependent sense. We also address the problem of prediction and give a weak hardness result for prediction, based on the difficulty of predicting a subclass of DNF formulas. On the positive side, we present prediction methods for certain subclasses of these concepts, including the mod-p concepts. We further show that many classes of embedded symmetric concepts have the property that, over the uniform distribution, they are either strongly learnable or else indistinguishable from random functions. Finally, we define a more general class of multi-symmetric concepts and present an exact learning algorithm, with membership queries, for this class.
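To make the definition concrete, here is a minimal sketch (ours, not from the paper) of an embedded symmetric concept as a Boolean function: a symmetric "pattern" function g is applied to the count of satisfied variables in a hidden relevant set. The helper embedded_symmetric and the pattern names below are illustrative, not notation from the paper.

```python
# Sketch of an embedded symmetric concept: f(x) = g(k), where k is the
# number of satisfied variables among the hidden relevant set R. All
# variables outside R are ignored, and g depends only on the count k.

def embedded_symmetric(g, relevant):
    """Return f(x) = g(sum of x[i] for i in relevant)."""
    def f(x):
        return g(sum(x[i] for i in relevant))
    return f

# Example pattern functions g : {0, ..., |R|} -> {0, 1} for |R| = r.
r = 4
parity    = lambda k: k % 2             # XOR of the relevant bits
conj      = lambda k: int(k == r)       # conjunction: all relevant bits set
disj      = lambda k: int(k >= 1)       # disjunction: some relevant bit set
consensus = lambda k: int(k in (0, r))  # relevant bits all agree
majority  = lambda k: int(k > r // 2)   # a threshold pattern

f = embedded_symmetric(parity, relevant=[0, 2, 5, 7])
print(f([1, 0, 1, 0, 0, 0, 0, 1]))      # bits 0,2,5,7 sum to 3 -> prints 1
```

The learnable subclasses named in the abstract correspond to particular pattern functions g (such as parity, conj, disj, and consensus above), while patterns like majority fall outside that small set.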
BibTeX
@inproceedings{Blum-1993-15968,
  author    = {Avrim Blum and Prasad Chalasani and Jeffrey Jackson},
  title     = {On learning embedded symmetric concepts},
  booktitle = {Proceedings of the 6th Annual ACM Conference on Computational Learning Theory (COLT '93)},
  year      = {1993},
  month     = {August},
  pages     = {337--346},
}