ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)




Reports tagged with PAC learning:
TR99-005 | 21st December 1998
Michael Schmitt

On the Sample Complexity for Nonoverlapping Neural Networks

A neural network is said to be nonoverlapping if there is at most one
edge outgoing from each node. We investigate the number of examples
that a learning algorithm needs when using nonoverlapping neural
networks as hypotheses. We derive bounds for this sample complexity
in terms of the Vapnik-Chervonenkis dimension. ... more >>>
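
Context (not from the abstract): by the standard VC bounds, any class of VC dimension $d$ is PAC learnable by a consistent learner from $m = O\big((d\log(1/\epsilon) + \log(1/\delta))/\epsilon\big)$ examples, and $\Omega(d/\epsilon)$ examples are necessary.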


TR01-006 | 18th October 2000
Rocco Servedio

On Learning Monotone DNF under Product Distributions

We show that the class of monotone $2^{O(\sqrt{\log n})}$-term DNF
formulae can be PAC learned in polynomial time under the uniform
distribution. This is an exponential improvement over previous
algorithms in this model, which could learn monotone
$o(\log^2 n)$-term DNF, and is the first efficient algorithm
for ... more >>>


TR02-060 | 15th July 2002
Ke Yang

New Lower Bounds for Statistical Query Learning

We prove two lower bounds on the Statistical Query (SQ) learning
model. The first lower bound is on weak-learning. We prove that for a
concept class of SQ-dimension $d$, a running time of
$\Omega(d/\log d)$ is needed. The SQ-dimension of a concept class is
defined to be the maximum number ... more >>>
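
Context (not from the abstract): the SQ dimension counts, roughly, how many pairwise nearly uncorrelated concepts the class contains under the target distribution. The Python sketch below (our illustration, not code from the paper) checks the textbook example: distinct parity functions are exactly uncorrelated under the uniform distribution, which is why large classes of parities are hard for SQ algorithms.

from itertools import product

n = 4
def parity(S):
    """The +/-1-valued parity of the bits indexed by S."""
    return lambda x: (-1) ** sum(x[i] for i in S)

subsets = [(0,), (1,), (0, 1), (0, 2, 3)]
for S, T in product(subsets, repeat=2):
    corr = sum(parity(S)(x) * parity(T)(x) for x in product((0, 1), repeat=n)) / 2 ** n
    print(S, T, corr)    # correlation is 1.0 when S == T and 0.0 otherwise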


TR06-057 | 19th April 2006
Adam Klivans, Alexander A. Sherstov

Cryptographic Hardness Results for Learning Intersections of Halfspaces

We give the first representation-independent hardness results for
PAC learning intersections of halfspaces, a central concept class
in computational learning theory. Our hardness results are derived
from two public-key cryptosystems due to Regev, which are based on the
worst-case hardness of well-studied lattice problems. Specifically, we
prove that a polynomial-time ... more >>>


TR06-106 | 18th August 2006
Scott Aaronson

The Learnability of Quantum States

Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n. But using ideas from computational learning theory, we show that "for most practical purposes" one can learn a state using a number of measurements that grows only linearly with n. Besides possible ... more >>>


TR08-091 | 10th September 2008
Vitaly Feldman

On The Power of Membership Queries in Agnostic Learning

Revisions: 1

We study the properties of the agnostic learning framework of Haussler (1992) and Kearns, Schapire and Sellie (1992). In particular, we address the question: is there any situation in which membership queries are useful in agnostic learning?

Our results show that the answer is negative for distribution-independent agnostic learning and positive ... more >>>
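
Context (not from the abstract): in agnostic learning the examples come from an arbitrary joint distribution $D$ over $X \times \{0,1\}$, and the learner must output a hypothesis $h$ with $\mathrm{err}_D(h) \le \min_{c\in C}\mathrm{err}_D(c) + \epsilon$.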


TR09-098 | 9th October 2009
Alexander A. Sherstov

The intersection of two halfspaces has high threshold degree

Revisions: 1

The threshold degree of a Boolean function
$f\colon\{0,1\}^n\to\{-1,+1\}$ is the least degree of a real
polynomial $p$ such that $f(x)\equiv\mathrm{sgn}\; p(x).$ We
construct two halfspaces on $\{0,1\}^n$ whose intersection has
threshold degree $\Theta(\sqrt n),$ an exponential improvement on
previous lower bounds. This solves an open problem due to Klivans
(2002) and ... more >>>
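
Context (not from the abstract): for small $n$ the threshold degree can be computed straight from the definition by checking, for each degree $k$, whether the linear program "find coefficients $c_S$ with $f(x)\sum_S c_S\prod_{i\in S}x_i \ge 1$ for all $x$" is feasible. The Python sketch below (illustrative only; it assumes numpy and scipy are available) does this by brute force and recovers the classical facts that AND has threshold degree 1 while PARITY on $n$ bits has threshold degree $n$.

from itertools import combinations, product
import numpy as np
from scipy.optimize import linprog

def monomials(n, k):
    """All index sets S of size at most k; S stands for the monomial prod_{i in S} x_i."""
    return [S for d in range(k + 1) for S in combinations(range(n), d)]

def sign_representable(f, n, k):
    """True iff some real polynomial of degree <= k sign-represents f on {0,1}^n."""
    mons = monomials(n, k)
    A_ub, b_ub = [], []
    for x in product((0, 1), repeat=n):
        row = [1.0 if not S else float(np.prod([x[i] for i in S])) for S in mons]
        A_ub.append([-f(x) * v for v in row])   # f(x) * p(x) >= 1  <=>  -f(x) * row . c <= -1
        b_ub.append(-1.0)
    res = linprog(np.zeros(len(mons)), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * len(mons))
    return res.status == 0                      # feasible

def threshold_degree(f, n):
    return next(k for k in range(n + 1) if sign_representable(f, n, k))

AND = lambda x: 1 if all(x) else -1
PARITY = lambda x: 1 if sum(x) % 2 == 0 else -1
print(threshold_degree(AND, 4))     # 1
print(threshold_degree(PARITY, 4))  # 4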


TR10-025 | 24th February 2010
Alexander A. Sherstov

Optimal bounds for sign-representing the intersection of two halfspaces by polynomials

The threshold degree of a function
$f\colon\{0,1\}^n\to\{-1,+1\}$ is the least degree of a
real polynomial $p$ with $f(x)\equiv\mathrm{sgn}\; p(x).$ We
prove that the intersection of two halfspaces on
$\{0,1\}^n$ has threshold degree $\Omega(n),$ which
matches the trivial upper bound and completely answers
a question due to Klivans (2002). The best ... more >>>


TR14-029 | 4th March 2014
Oded Goldreich, Dana Ron

On Learning and Testing Dynamic Environments

Revisions: 3

We initiate a study of learning and testing dynamic environments,
focusing on environments that evolve according to a fixed local rule.
The (proper) learning task consists of obtaining the initial configuration
of the environment, whereas for non-proper learning it suffices to predict
its future values. The testing task consists of ... more >>>
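
Context (not from the abstract): as a concrete toy instance of an environment evolving under a fixed local rule, the Python sketch below (our illustration; the rule, encoding, and boundary condition are not taken from the paper) evolves a one-dimensional configuration under elementary cellular-automaton rule 110. A proper learner would have to recover the initial configuration; a non-proper learner only has to predict the later rows.

import numpy as np

RULE = 110
TABLE = [(RULE >> k) & 1 for k in range(8)]        # output for the neighborhood with binary value k

def step(config):
    """One synchronous application of the local rule, with cyclic boundary."""
    left, right = np.roll(config, 1), np.roll(config, -1)
    return np.array([TABLE[4 * l + 2 * c + r] for l, c, r in zip(left, config, right)])

rng = np.random.default_rng(0)
initial = rng.integers(0, 2, size=32)              # what a *proper* learner must recover
history = [initial]
for _ in range(10):
    history.append(step(history[-1]))              # what a *non-proper* learner must predict
print(np.array(history))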


TR14-058 | 20th April 2014
Ilya Volkovich

On Learning, Lower Bounds and (un)Keeping Promises

We extend the line of research initiated by Fortnow and Klivans \cite{FortnowKlivans09} that studies the relationship between efficient learning algorithms and circuit lower bounds. In \cite{FortnowKlivans09}, it was shown that if a Boolean circuit class $\mathcal{C}$ has an efficient \emph{deterministic} exact learning algorithm, (i.e. an algorithm that uses membership and ... more >>>


TR14-144 | 30th October 2014
Eric Blais, Clement Canonne, Igor Carboni Oliveira, Rocco Servedio, Li-Yang Tan

Learning circuits with few negations

Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and ... more >>>


TR15-040 | 24th March 2015
Shay Moran, Amir Yehudayoff

Proper PAC learning is compressing

Revisions: 1

We prove that proper PAC learnability implies compression. Namely, if a concept class $C \subseteq \Sigma^X$ is properly PAC learnable with $d$ samples, then $C$ has a sample compression scheme of size $2^{O(d)}$.
In particular, every boolean concept class with constant VC dimension has a sample compression scheme of constant size. ... more >>>
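
Context (not from the abstract): the textbook example of a sample compression scheme has size 1 and works for one-dimensional thresholds $h_t(x)=+1$ iff $x\ge t$: keep a single labeled point from which a consistent hypothesis can be rebuilt. The Python sketch below is only an illustration of the notion, not a construction from the paper.

def compress(sample):
    """Keep one labeled point that determines a consistent threshold (size-1 compression)."""
    positives = [x for x, y in sample if y == +1]
    if positives:
        return (min(positives), +1)                 # leftmost positive example
    return (max(x for x, _ in sample), -1)          # all negative: keep the rightmost point

def reconstruct(kept):
    x0, y0 = kept
    t = x0 if y0 == +1 else float("inf")            # threshold at the kept point, or "never +1"
    return lambda x: +1 if x >= t else -1

sample = [(0.2, -1), (1.5, +1), (0.9, -1), (2.4, +1)]
h = reconstruct(compress(sample))
print(all(h(x) == y for x, y in sample))            # True: h is consistent with the full sample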


TR16-113 | 22nd July 2016
Gillat Kol, Ran Raz, Avishay Tal

Time-Space Hardness of Learning Sparse Parities

We define a concept class ${\cal F}$ to be time-space hard (or memory-samples hard) if any learning algorithm for ${\cal F}$ requires either a memory of size super-linear in $n$ or a number of samples super-polynomial in $n$, where $n$ is the length of one sample.

A recent work shows ... more >>>


TR17-017 | 5th February 2017
Michal Moshkovitz, Dana Moshkovitz

Mixing Implies Lower Bounds for Space Bounded Learning

One can learn any hypothesis class $H$ with $O(\log|H|)$ labeled examples. Alas, learning with so few examples requires saving the examples in memory, and this requires $|X|^{O(\log|H|)}$ memory states, where $X$ is the set of all labeled examples. A question that arises is how many labeled examples are needed in ... more >>>
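
Context (not from the abstract): the $O(\log|H|)$ bound is the usual union-bound argument: a hypothesis with error greater than $\epsilon$ is consistent with $m$ independent examples with probability at most $(1-\epsilon)^m$, so $m = O(\log(|H|/\delta)/\epsilon)$ examples suffice for any consistent learner.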


TR17-116 | 5th July 2017
Michal Moshkovitz, Dana Moshkovitz

Mixing Implies Strong Lower Bounds for Space Bounded Learning

With any hypothesis class one can associate a bipartite graph whose vertices are the hypotheses H on one side and all possible labeled examples X on the other side, and a hypothesis is connected to all the labeled examples that are consistent with it. We call this graph the hypotheses ... more >>>
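
Context (not from the abstract): the hypotheses graph can be written down explicitly for toy classes. The Python sketch below (our example; the class of three thresholds over a four-point domain is not from the paper) lists the edges between hypotheses and the labeled examples consistent with them.

from itertools import product

X = [1, 2, 3, 4]                                                   # the domain
H = {f"x>={t}": (lambda x, t=t: int(x >= t)) for t in (2, 3, 4)}   # three threshold hypotheses

edges = [(name, (x, y))
         for (name, h), (x, y) in product(H.items(), product(X, (0, 1)))
         if h(x) == y]     # a hypothesis is adjacent to every labeled example consistent with it
print(edges)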


TR17-121 | 31st July 2017
Sumegha Garg, Ran Raz, Avishay Tal

Extractor-Based Time-Space Lower Bounds for Learning

Revisions: 1

A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem: An unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples, $(a_1, b_1), (a_2, b_2) \ldots$, where for every $i$, $a_i \in A$ is chosen ... more >>>
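
Context (not from the abstract): a standard instantiation of this framework is the inner-product matrix $M(a,x)=(-1)^{\langle a,x\rangle \bmod 2}$, for which the learning problem is exactly learning parities. The Python sketch below (our illustration; the parameters are arbitrary) just generates the sample stream $(a_1,b_1),(a_2,b_2),\ldots$ for a hidden $x$.

import numpy as np

n = 6
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=n)                      # the unknown element of X = {0,1}^n

def stream():
    """The stream (a_1, b_1), (a_2, b_2), ...: a_i uniform in A = {0,1}^n, b_i = M(a_i, x)."""
    while True:
        a = rng.integers(0, 2, size=n)
        yield a, (-1) ** int(a @ x % 2)             # M(a, x) = (-1)^{<a, x> mod 2}

g = stream()
print([next(g) for _ in range(3)])                  # a few labeled samples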


TR19-071 | 14th May 2019
Sumegha Garg, Ran Raz, Avishay Tal

Time-Space Lower Bounds for Two-Pass Learning

A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [Raz16,KRT17,Raz17,MM18,BOGY18,GRT18]. For example, any algorithm for learning parities of size $n$ requires either a memory of size $\Omega(n^{2})$ or an exponential number ... more >>>
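
Context (not from the abstract): the matching upper bound is the folklore learner that stores about $n$ linearly independent equations (roughly $n^2$ bits) and solves them by Gaussian elimination over GF(2). The Python sketch below is an illustrative rendering of that learner, not code from any of the cited papers.

import numpy as np

n = 8
rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=n, dtype=np.uint8)     # hidden parity x

def samples():
    """Uniformly random examples (a, <a, x> mod 2)."""
    while True:
        a = rng.integers(0, 2, size=n, dtype=np.uint8)
        yield a, (a @ secret) % 2

def learn_parity(stream, n):
    """Store up to n independent equations (~n^2 bits) and solve by elimination over GF(2)."""
    M = np.zeros((0, n + 1), dtype=np.uint8)            # rows of the form [a | b]
    for a, b in stream:
        row = np.append(a, b)
        for r in M:                                     # reduce against the stored pivots
            p = int(np.argmax(r[:n]))
            if row[p]:
                row ^= r
        if row[:n].any():
            p = int(np.argmax(row[:n]))
            for i in range(len(M)):                     # clear the new pivot from stored rows
                if M[i, p]:
                    M[i] ^= row
            M = np.vstack([M, row])
        if len(M) == n:
            break
    x = np.zeros(n, dtype=np.uint8)
    for r in M:                                         # fully reduced: each row fixes one bit
        x[int(np.argmax(r[:n]))] = r[n]
    return x

print(np.array_equal(learn_parity(samples(), n), secret))   # True: the secret is recovered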


TR19-123 | 12th September 2019
Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

On the Hardness of Robust Classification

It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. In this paper we study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity. In particular, our definition of robust learnability requires polynomial sample complexity. ... more >>>


TR19-166 | 20th November 2019
Guy Blanc, Jane Lange, Li-Yang Tan

Top-down induction of decision trees: rigorous guarantees and inherent limitations

Consider the following heuristic for building a decision tree for a function $f : \{0,1\}^n \to \{\pm 1\}$. Place the most influential variable $x_i$ of $f$ at the root, and recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the left and right subtrees respectively; terminate once the tree is an ... more >>>
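
Context (not from the abstract): the heuristic can be written out directly for small $n$ by computing influences exactly over the uniform distribution. The Python sketch below is a brute-force rendering of the recursion (our code, not the paper's), run here on the 3-bit majority function.

from itertools import product

def influence(f, n, i):
    """Pr over uniform x of f(x) != f(x with bit i flipped)."""
    return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
               for x in product((0, 1), repeat=n)) / 2 ** n

def build_tree(f, n, depth):
    values = {f(x) for x in product((0, 1), repeat=n)}
    if len(values) == 1:
        return values.pop()                               # constant subfunction: leaf
    if depth == 0:
        return "?"                                        # truncated branch
    i = max(range(n), key=lambda j: influence(f, n, j))   # most influential variable at the root
    sub = lambda b: (lambda x: f(x[:i] + (b,) + x[i + 1:]))
    return (f"x{i}", build_tree(sub(0), n, depth - 1), build_tree(sub(1), n, depth - 1))

MAJ3 = lambda x: 1 if sum(x) >= 2 else -1
print(build_tree(MAJ3, 3, depth=3))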


TR20-058 | 24th April 2020
Shafi Goldwasser, Guy Rothblum, Jonathan Shafer, Amir Yehudayoff

Interactive Proofs for Verifying Machine Learning

Revisions: 1

We consider the following question: using a source of labeled data and interaction with an untrusted prover, what is the complexity of verifying that a given hypothesis is "approximately correct"? We study interactive proof systems for PAC verification, where a verifier that interacts with a prover is required to accept ... more >>>


TR20-185 | 1st December 2020
Srinivasan Arunachalam, Alex Grilo, Tom Gur, Igor Oliveira, Aarthi Sundaram

Quantum learning algorithms imply circuit lower bounds

Revisions: 1

We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathrm{C}$ be a class of polynomial-size concepts, and suppose that $\mathrm{C}$ can be PAC-learned with membership queries under the uniform distribution with error $1/2 - \gamma$ by a time $T$ quantum algorithm. ... more >>>


TR21-179 | 8th December 2021
Tatsuie Tsukiji

Smoothed Complexity of Learning Disjunctive Normal Forms, Inverting Fourier Transforms, and Verifying Small Circuits

Comments: 1

This paper aims to derandomize the following problems in the smoothed analysis of Spielman and Teng: learning Disjunctive Normal Forms (DNF), inverting Fourier Transforms (FT), and verifying small circuits' unsatisfiability. Learning algorithms must predict a future observation from only $m$ i.i.d. samples of a fixed but unknown joint distribution $P(G(x),y)$ ... more >>>


TR22-035 | 7th March 2022
Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, Amir Yehudayoff

A Characterization of Multiclass Learnability

A seminal result in learning theory characterizes the PAC learnability of binary classes through the Vapnik-Chervonenkis dimension. Extending this characterization to the general multiclass setting has been open since the pioneering works on multiclass PAC learning in the late 1980s. This work resolves this problem: we characterize multiclass PAC learnability ... more >>>
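
Context (not from the abstract): the binary characterization referred to above is via the Vapnik-Chervonenkis dimension, the size of the largest set of points shattered by the class. For tiny finite classes it can be computed by brute force, as in the Python sketch below (our illustration, not code from the paper).

from itertools import combinations

def vc_dimension(H, X):
    """Size of the largest subset of X shattered by the hypotheses in H (brute force)."""
    best = 0
    for d in range(1, len(X) + 1):
        for S in combinations(X, d):
            patterns = {tuple(h(x) for x in S) for h in H}
            if len(patterns) == 2 ** d:              # S is shattered: all 2^d labelings appear
                best = d
                break
        else:
            return best
    return best

X = list(range(10))
thresholds = [lambda x, t=t: int(x >= t) for t in range(11)]
print(vc_dimension(thresholds, X))                   # 1: thresholds on a line have VC dimension 1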


TR22-119 | 24th August 2022
Shuichi Hirahara

NP-Hardness of Learning Programs and Partial MCSP

A long-standing open question in computational learning theory is to prove NP-hardness of learning efficient programs, the setting of which is in between proper learning and improper learning. Ko (COLT'90, SICOMP'91) explicitly raised this open question and demonstrated its difficulty by proving that there exists no relativizing proof of NP-hardness ... more >>>


TR22-164 | 20th November 2022
Shuichi Hirahara, Mikito Nanashima

Learning versus Pseudorandom Generators in Constant Parallel Time

A polynomial-stretch pseudorandom generator (PPRG) in NC$^0$ (i.e., constant parallel time) is one of the most important cryptographic primitives, especially for constructing highly efficient cryptography and indistinguishability obfuscation. The celebrated work (Applebaum, Ishai, and Kushilevitz, SIAM Journal on Computing, 2006) on randomized encodings yields the characterization of sublinear-stretch pseudorandom generators ... more >>>


TR22-178 | 8th December 2022
Ilias Diakonikolas, Christos Tzamos, Daniel Kane

A Strongly Polynomial Algorithm for Approximate Forster Transforms and its Application to Halfspace Learning

The Forster transform is a method of regularizing a dataset by placing it in {\em radial isotropic position} while maintaining some of its essential properties. Forster transforms have played a key role in a diverse range of settings spanning computer science and functional analysis. Prior work had given {\em ... more >>>
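
Context (not from the abstract): the target condition of a Forster transform is that, after applying a linear map and rescaling each point to the unit sphere, the resulting points $u_i$ satisfy $\sum_i u_iu_i^{T} = (n/d)\,I$. The Python sketch below implements only a naive alternating heuristic (normalize the rows, whiten by the symmetric inverse square root of their second moment, repeat); it is emphatically not the strongly polynomial algorithm of the report and no convergence guarantee is claimed, but on generic data it illustrates the condition.

import numpy as np

def radial_isotropic_heuristic(points, iters=200):
    """Alternate row normalization with whitening by M^{-1/2}; a heuristic only."""
    U = points / np.linalg.norm(points, axis=1, keepdims=True)
    n, d = U.shape
    for _ in range(iters):
        M = (d / n) * U.T @ U                        # scaled second moment of the unit vectors
        w, V = np.linalg.eigh(M)
        U = U @ (V @ np.diag(w ** -0.5) @ V.T)       # apply the symmetric M^{-1/2}
        U /= np.linalg.norm(U, axis=1, keepdims=True)
    return U

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3)) @ np.diag([5.0, 1.0, 0.2])   # a skewed point cloud
U = radial_isotropic_heuristic(X)
print(np.round((3 / 50) * U.T @ U, 3))               # close to the 3x3 identity if the heuristic settles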


TR23-085 | 4th June 2023
Ari Karchmer

Average-Case PAC-Learning from Nisan's Natural Proofs

Revisions: 2

Carmosino et al. (2016) demonstrated that natural proofs of circuit lower bounds imply algorithms for learning circuits with membership queries over the uniform distribution. Indeed, they exercised this implication to obtain a quasi-polynomial time learning algorithm for ${AC}^0[p]$ circuits, for any prime $p$, by leveraging the existing natural proofs from ... more >>>


TR23-142 | 21st September 2023
Tom Gur, Wilfred Salmon, Sergii Strelchuk

Provable Advantage in Quantum PAC Learning

We revisit the problem of characterising the complexity of Quantum PAC learning, as introduced by Bshouty and Jackson [SIAM J. Comput. 1998, 28, 1136–1153]. Several quantum advantages have been demonstrated in this setting; however, none are generic: they apply to particular concept classes and typically only work when the distribution ... more >>>



