ECCC
Electronic Colloquium on Computational Complexity



Paper:

TR06-059 | 3rd May 2006

New Results for Learning Noisy Parities and Halfspaces

Abstract:

We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise.

Learning parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities, thus highlighting the central role of this problem for learning under the uniform distribution. We show that, under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al., this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning DNF expressions reduces to learning noisy parities on just a logarithmic number of variables, and that learning k-juntas reduces to learning noisy parities on k variables. These reductions work even in the presence of random classification noise in the original DNF or junta.

We then consider the problem of learning halfspaces over $\mathbb{Q}^n$ with adversarial noise, or equivalently, finding a halfspace that maximizes the agreement rate with a given set of examples. Finding the best halfspace is known to be NP-hard, and many inapproximability results are known for this problem. We show that even if there is a halfspace that correctly classifies a $1-\epsilon$ fraction of the given examples, it is hard to find a halfspace that is correct on a $1/2+\epsilon$ fraction for any $\epsilon > 0$, assuming $\mathrm{P} \neq \mathrm{NP}$. This gives an essentially optimal inapproximability factor of $2-\epsilon$, improving on the factor of $85/84-\epsilon$ due to Bshouty and Burroughs.

Finally, we prove that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the security of the Ajtai-Dwork cryptosystem. This result implies that learning halfspaces with a high level of adversarial noise is hard, regardless of the representation of the hypothesis.


