Revision #1 Authors: Adam Klivans, Pravesh Kothari, Igor Oliveira

Accepted on: 17th September 2013 19:40

Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class $\mathcal{C} \subseteq P/poly$ of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial time, then $EXP^{NP} \not \subseteq \mathcal{C}$ (the class $EXP^{NP}$ was subsequently improved to $EXP$ by Hitchcock and Harkins). In this paper, we improve on these results and show:

1. If $\mathcal{C}$ is exactly learnable with membership and equivalence queries in polynomial time, then $DTIME(n^{\omega(1)}) \not \subseteq \mathcal{C}$. We obtain even stronger consequences if the class $\mathcal{C}$ is learnable in the mistake-bounded model, in which case we prove an average-case hardness result against $\mathcal{C}$.

2. If $\mathcal{C}$ is learnable in polynomial time in the PAC model, then $PSPACE \not \subseteq \mathcal{C}$, unless $PSPACE \subseteq P$. Removing this extra assumption from the statement of the theorem would provide an unconditional proof that $PSPACE \not \subseteq P$.

3. If $\mathcal{C}$ is efficiently learnable in the Correlational Statistical Query (CSQ) model, we show that there exists an explicit function $f$ that is average-case hard for circuits in $\mathcal{C}$. This result provides stronger average-case hardness guarantees than those obtained by SQ-dimension arguments (Blum et al. 1993). We also obtain a non-constructive extension of this result to the stronger Statistical Query (SQ) model.

Similar results hold in the case where the learning algorithm runs in subexponential time.

Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to ``diagonalize'' over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.
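The diagonalization idea behind the mistake-bounded result can be illustrated with a toy sketch. The interface below (`predict`/`update`) is a hypothetical stand-in for a mistake-bounded learner, not the paper's actual construction: label every example with the opposite of the learner's current prediction, so the learner errs on each one. Since a learner for $\mathcal{C}$ makes at most $t$ mistakes on any target in $\mathcal{C}$, a function that forces more than $t$ mistakes cannot lie in $\mathcal{C}$.

```python
def diagonal_function(learner, inputs):
    """Build labels that force the learner to err on every input:
    label each x with the opposite of the learner's prediction,
    then feed that label back as the 'true' answer."""
    labels = {}
    for x in inputs:
        pred = learner.predict(x)      # learner's current guess on x
        labels[x] = 1 - pred           # choose the label that makes it wrong
        learner.update(x, labels[x])   # honest feedback after the mistake
    return labels

# Toy learner (hypothetical): memorizes seen examples, predicts 0 otherwise.
# Its mistake bound against this adversary is unbounded, which is the point:
# any learner with a finite mistake bound t for C certifies that the
# diagonal function (forcing > t mistakes) is outside C.
class MemorizingLearner:
    def __init__(self):
        self.seen = {}
    def predict(self, x):
        return self.seen.get(x, 0)
    def update(self, x, y):
        self.seen[x] = y

learner = MemorizingLearner()
f = diagonal_function(learner, range(8))
# By construction the learner was wrong on every one of the 8 inputs.
```

This only sketches the adversary argument; the paper's construction additionally keeps the diagonal function computable in the stated deterministic time, which is what yields the explicit lower bound.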

Corrected typo in the abstract.

TR13-129 Authors: Adam Klivans, Pravesh Kothari, Igor Oliveira

Publication: 17th September 2013 13:27
