Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)




TR17-114 | 1st July 2017 13:36

Proper Learning of k-term DNF Formulas from Satisfying Assignments



In certain applications, only positive samples are available for learning
concepts from a class of interest.
Moreover, learning has to be done properly, i.e., the
hypothesis space must coincide with the concept class,
and without false positives, i.e., the hypothesis must always be a subset of the true concept (one-sided error).
For the well-studied class of k-term DNF formulas, learning is known to be hard in this setting:
unless RP = NP, it is not feasible to learn k-term DNF formulas properly in a distribution-free sense, even if both positive and negative samples are available and even if false positives are allowed.

This paper constructs an efficient algorithm that, for arbitrary fixed k,
properly learns the class of k-term DNF formulas
from positive samples alone, without false positives
and with arbitrarily small relative error,
provided the samples are drawn from the uniform distribution or, more generally, from q-bounded distributions.
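To make the concept class and the one-sided-error requirement concrete, here is a minimal Python sketch. It is not the paper's algorithm; the representation of terms and all names are illustrative assumptions. A k-term DNF accepts an assignment if any of its (at most k) terms is satisfied, and a hypothesis has one-sided error when every assignment it accepts is also accepted by the target.

```python
from itertools import product

# Illustrative encoding (not from the paper): a term is a dict mapping a
# variable index to the boolean value its literal requires; a k-term DNF
# is a list of such terms.

def term_satisfied(term, x):
    # A term is a conjunction of literals; all of them must match x.
    return all(x[i] == v for i, v in term.items())

def dnf_accepts(dnf, x):
    # A DNF accepts x if at least one of its terms is satisfied.
    return any(term_satisfied(t, x) for t in dnf)

def one_sided(hypothesis, target, n):
    # True iff the hypothesis produces no false positives: every assignment
    # it accepts is also accepted by the target (checked exhaustively,
    # feasible only for small n).
    return all(dnf_accepts(target, x) or not dnf_accepts(hypothesis, x)
               for x in product([False, True], repeat=n))

# Example target: a 2-term DNF over 4 variables,
# (x0 AND NOT x1) OR (x2 AND x3).
target = [{0: True, 1: False}, {2: True, 3: True}]
# Dropping a term yields a hypothesis that accepts a subset of the target.
hypothesis = [{0: True, 1: False}]
print(one_sided(hypothesis, target, 4))  # True: no false positives
print(one_sided(target, hypothesis, 4))  # False: target accepts more
```

The subset relation checked here is exactly the one-sided-error condition described above; the paper's contribution is achieving it efficiently from positive samples alone, which this brute-force check does not address.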

ISSN 1433-8092