Electronic Colloquium on Computational Complexity (ECCC)

Under the auspices of the Computational Complexity Foundation (CCF)




Reports tagged with PAC learning:
TR99-005 | 21st December 1998
Michael Schmitt

On the Sample Complexity for Nonoverlapping Neural Networks

A neural network is said to be nonoverlapping if there is at most one
edge outgoing from each node. We investigate the number of examples
that a learning algorithm needs when using nonoverlapping neural
networks as hypotheses. We derive bounds for this sample complexity
in terms of the Vapnik-Chervonenkis dimension. ... more >>>
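
(Background, not specific to this report: for a concept class of VC dimension $d$, the standard realizable PAC bound gives sample complexity $m(\epsilon,\delta) = O\big(\frac{1}{\epsilon}(d\log\frac{1}{\epsilon} + \log\frac{1}{\delta})\big)$, so any bound on the VC dimension of nonoverlapping networks translates directly into a bound on the number of examples needed.)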


TR01-006 | 18th October 2000
Rocco Servedio

On Learning Monotone DNF under Product Distributions

We show that the class of monotone $2^{O(\sqrt{\log n})}$-term DNF
formulae can be PAC learned in polynomial time under the uniform
distribution. This is an exponential improvement over previous
algorithms in this model, which could learn monotone
$o(\log^2 n)$-term DNF, and is the first efficient algorithm
for ... more >>>
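
(For concreteness, an illustrative example rather than one from the report: $f(x) = (x_1 \wedge x_3) \vee (x_2 \wedge x_4 \wedge x_5)$ is a monotone 2-term DNF; no literal is negated, so flipping any input bit from 0 to 1 can only move $f$ from 0 to 1.)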


TR02-060 | 15th July 2002
Ke Yang

New Lower Bounds for Statistical Query Learning

We prove two lower bounds on the Statistical Query (SQ) learning
model. The first lower bound concerns weak learning. We prove that for a
concept class of SQ-dimension $d$, a running time of
$\Omega(d/\log d)$ is needed. The SQ-dimension of a concept class is
defined to be the maximum number ... more >>>
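
In the SQ model the learner never sees individual labeled examples; it only receives estimates of expectations, each accurate to a requested tolerance. The sketch below is an illustrative Python simulation of such an oracle (the function names and the uniform input distribution are assumptions made for the example, not details from the report).

    import random

    def sq_oracle(query, tolerance, target, n, n_samples=10000):
        """Estimate E[query(x, target(x))] over uniform x in {0,1}^n.
        A real SQ oracle may return any value within +/- tolerance of the
        true expectation; here that slack is simulated with bounded noise."""
        total = 0.0
        for _ in range(n_samples):
            x = tuple(random.randint(0, 1) for _ in range(n))
            total += query(x, target(x))
        return total / n_samples + random.uniform(-tolerance, tolerance)

    # Example query: correlation of the first input bit with a parity target.
    target = lambda x: x[0] ^ x[1]
    corr = sq_oracle(lambda x, y: 1.0 if x[0] == y else -1.0, 0.01, target, n=5)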


TR06-057 | 19th April 2006
Adam Klivans, Alexander A. Sherstov

Cryptographic Hardness Results for Learning Intersections of Halfspaces

We give the first representation-independent hardness results for
PAC learning intersections of halfspaces, a central concept class
in computational learning theory. Our hardness results are derived
from two public-key cryptosystems due to Regev, which are based on the
worst-case hardness of well-studied lattice problems. Specifically, we
prove that a polynomial-time ... more >>>
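
(To fix notation, standard background rather than material from the report: an intersection of $k$ halfspaces over $\{0,1\}^n$ is a function of the form $f(x) = \bigwedge_{i=1}^{k} [\langle w_i, x\rangle \geq \theta_i]$ for real weight vectors $w_i$ and thresholds $\theta_i$; representation-independent hardness means the learner remains stuck even if it may output its hypothesis in an arbitrary representation.)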


TR06-106 | 18th August 2006
Scott Aaronson

The Learnability of Quantum States

Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n. But using ideas from computational learning theory, we show that "for most practical purposes" one can learn a state using a number of measurements that grows only linearly with n. Besides possible ... more >>>


TR08-091 | 10th September 2008
Vitaly Feldman

On The Power of Membership Queries in Agnostic Learning

Revisions: 1

We study the properties of the agnostic learning framework of Haussler (1992) and Kearns, Schapire and Sellie (1992). In particular, we address the question: is there any situation in which membership queries are useful in agnostic learning?

Our results show that the answer is negative for distribution-independent agnostic learning and positive ... more >>>
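
(For context, the standard agnostic-learning goal as formulated in those works: examples are drawn from an arbitrary joint distribution $D$ over instances and labels, and the learner must output a hypothesis $h$ with $\mathrm{err}_D(h) \leq \min_{c \in C}\mathrm{err}_D(c) + \epsilon$, i.e. error close to that of the best concept in the class, with no promise that any concept fits the data exactly.)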


TR09-098 | 9th October 2009
Alexander A. Sherstov

The intersection of two halfspaces has high threshold degree

Revisions: 1

The threshold degree of a Boolean function
$f\colon\{0,1\}^n\to\{-1,+1\}$ is the least degree of a real
polynomial $p$ such that $f(x)\equiv\mathrm{sgn}\; p(x).$ We
construct two halfspaces on $\{0,1\}^n$ whose intersection has
threshold degree $\Theta(\sqrt n),$ an exponential improvement on
previous lower bounds. This solves an open problem due to Klivans
(2002) and ... more >>>
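
(An elementary point of comparison, not taken from the report: a single halfspace $h(x) = \mathrm{sgn}(\langle w, x\rangle - \theta)$ has threshold degree 1, since the linear polynomial $\langle w, x\rangle - \theta$ already sign-represents it, perturbing $\theta$ if necessary so it never vanishes on $\{0,1\}^n$; the $\Theta(\sqrt n)$ bound shows that intersecting just two halfspaces forces the degree to grow polynomially with $n$.)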


TR10-025 | 24th February 2010
Alexander A. Sherstov

Optimal bounds for sign-representing the intersection of two halfspaces by polynomials

The threshold degree of a function
$f\colon\{0,1\}^n\to\{-1,+1\}$ is the least degree of a
real polynomial $p$ with $f(x)\equiv\mathrm{sgn}\; p(x).$ We
prove that the intersection of two halfspaces on
$\{0,1\}^n$ has threshold degree $\Omega(n),$ which
matches the trivial upper bound and completely answers
a question due to Klivans (2002). The best ... more >>>
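
For small $n$, threshold degree can be computed directly by linear programming: $f$ has threshold degree at most $d$ iff some degree-$d$ polynomial $p$ satisfies $f(x)\,p(x) \geq 1$ on all of $\{0,1\}^n$ (rescaling turns any sign-representation into one with margin 1). The Python sketch below is an illustrative brute-force check along these lines, not code from the paper.

    from itertools import combinations, product
    import numpy as np
    from scipy.optimize import linprog

    def monomials(n, d):
        """All variable sets defining multilinear monomials of degree <= d."""
        return [S for k in range(d + 1) for S in combinations(range(n), k)]

    def sign_representable(f, n, d):
        """Is there a degree-<=d polynomial p with f(x) * p(x) >= 1 for all x?"""
        mons = monomials(n, d)
        rows, rhs = [], []
        for x in product([0, 1], repeat=n):
            feats = [float(np.prod([x[i] for i in S])) if S else 1.0 for S in mons]
            rows.append([-f(x) * v for v in feats])   # encodes -f(x) * p(x) <= -1
            rhs.append(-1.0)
        res = linprog(c=np.zeros(len(mons)), A_ub=rows, b_ub=rhs,
                      bounds=[(None, None)] * len(mons), method="highs")
        return res.success

    def threshold_degree(f, n):
        return next(d for d in range(n + 1) if sign_representable(f, n, d))

    # Example: AND of the first two bits, outputs in {-1,+1}; degree 1 suffices.
    print(threshold_degree(lambda x: 1 if x[0] and x[1] else -1, 4))

The exhaustive enumeration over all $2^n$ inputs clearly does not scale, which is why degree bounds like the $\Omega(n)$ above require analytic arguments.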


TR14-029 | 4th March 2014
Oded Goldreich, Dana Ron

On Learning and Testing Dynamic Environments

Revisions: 3

We initiate a study of learning and testing dynamic environments,
focusing on environments that evolve according to a fixed local rule.
The (proper) learning task consists of obtaining the initial configuration
of the environment, whereas for non-proper learning it suffices to predict
its future values. The testing task consists of ... more >>>
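
As a toy illustration of the model (a generic one-dimensional example assumed here for concreteness; the report's setting is more general): an environment evolving by a fixed local rule behaves like a cellular automaton, where each cell's next value depends only on its current neighborhood.

    def evolve(config, rule, steps):
        """Evolve a cyclic one-dimensional 0/1 configuration, where rule maps a
        (left, center, right) neighborhood to the cell's next value."""
        n = len(config)
        for _ in range(steps):
            config = [rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
                      for i in range(n)]
        return config

    # Example local rule: the next value of a cell is the XOR of its two neighbors.
    xor_rule = lambda left, center, right: left ^ right
    print(evolve([0, 0, 1, 0, 0, 0], xor_rule, 3))

In these terms, proper learning asks to recover the initial configuration from later observations, while non-proper learning only has to predict values at future times.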


TR14-058 | 20th April 2014
Ilya Volkovich

On Learning, Lower Bounds and (un)Keeping Promises

We extend the line of research initiated by Fortnow and Klivans \cite{FortnowKlivans09} that studies the relationship between efficient learning algorithms and circuit lower bounds. In \cite{FortnowKlivans09}, it was shown that if a Boolean circuit class $\mathcal{C}$ has an efficient \emph{deterministic} exact learning algorithm (i.e. an algorithm that uses membership and ... more >>>


TR14-144 | 30th October 2014
Eric Blais, Clement Canonne, Igor Carboni Oliveira, Rocco Servedio, Li-Yang Tan

Learning circuits with few negations

Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and ... more >>>
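
(Relevant background, a classical theorem of Markov: every Boolean function on $n$ variables is computable by a circuit with at most $\lceil \log_2(n+1)\rceil$ negation gates, so the measure studied here ranges from $0$ for monotone functions up to about $\log n$ in general.)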


TR15-040 | 24th March 2015
Shay Moran, Amir Yehudayoff

Proper PAC learning is compressing

Revisions: 1

We prove that proper PAC learnability implies compression. Namely, if a concept class $C \subseteq \Sigma^X$ is properly PAC learnable with $d$ samples, then $C$ has a sample compression scheme of size $2^{O(d)}$.
In particular, every Boolean concept class with constant VC dimension has a sample compression scheme of constant size. ... more >>>
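
A toy instance of a sample compression scheme (a standard illustrative example for one-dimensional threshold functions, not taken from the paper): any sample consistent with a threshold can be compressed to a single labeled point from which a consistent hypothesis is rebuilt.

    def compress(sample):
        """For thresholds of the form x >= t: keep only the smallest point
        labeled 1, or None if the sample has no positive example."""
        positives = [x for x, y in sample if y == 1]
        return min(positives) if positives else None

    def reconstruct(kept):
        """Rebuild a hypothesis from the compressed sample."""
        if kept is None:
            return lambda x: 0
        return lambda x: 1 if x >= kept else 0

    # Every point of the original sample is classified correctly again.
    sample = [(0.2, 0), (0.7, 1), (0.9, 1), (0.1, 0)]
    h = reconstruct(compress(sample))
    assert all(h(x) == y for x, y in sample)

Here the compression size is 1, matching the VC dimension of one-dimensional thresholds.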


TR16-113 | 22nd July 2016
Gillat Kol, Ran Raz, Avishay Tal

Time-Space Hardness of Learning Sparse Parities

We define a concept class ${\cal F}$ to be time-space hard (or memory-samples hard) if any learning algorithm for ${\cal F}$ requires either a memory of size super-linear in $n$ or a number of samples super-polynomial in $n$, where $n$ is the length of one sample.

A recent work shows ... more >>>
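
For contrast, a standard memory-heavy algorithm (generic background, not from the report): a parity over $n$ bits can be learned from $O(n)$ samples by Gaussian elimination over GF(2), but the algorithm stores a full linear system, on the order of $n^2$ bits, which is exactly the kind of super-linear memory that such time-space lower bounds show cannot be avoided without using super-polynomially many samples.

    import random

    def learn_parity(samples, n):
        """Recover a hidden parity vector x in {0,1}^n from samples (a, b) with
        b = <a, x> mod 2, via Gaussian elimination over GF(2). The augmented
        matrix alone occupies about len(samples) * (n + 1) bits of memory."""
        rows = [list(a) + [b] for a, b in samples]
        pivots = {}  # column -> index of the row used to eliminate that column
        for col in range(n):
            idx = next((i for i, r in enumerate(rows)
                        if i not in pivots.values() and r[col] == 1), None)
            if idx is None:
                continue  # coordinate not determined by the samples; leave it 0
            pivots[col] = idx
            for i, r in enumerate(rows):
                if i != idx and r[col] == 1:
                    rows[i] = [u ^ v for u, v in zip(r, rows[idx])]
        x = [0] * n
        for col, idx in pivots.items():
            x[col] = rows[idx][n]
        return x

    # Example: hidden parity on bits {0, 2}; a dozen random samples recover it
    # with high probability.
    secret = [1, 0, 1, 0]
    samples = [(a, sum(u * v for u, v in zip(a, secret)) % 2)
               for a in ([random.randint(0, 1) for _ in range(4)] for _ in range(12))]
    print(learn_parity(samples, 4))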


TR17-017 | 5th February 2017
Michal Moshkovitz, Dana Moshkovitz

Mixing Implies Lower Bounds for Space Bounded Learning

One can learn any hypothesis class $H$ with $O(\log|H|)$ labeled examples. Alas, learning with so few examples requires saving the examples in memory, and this requires $|X|^{O(\log|H|)}$ memory states, where $X$ is the set of all labeled examples. A question that arises is how many labeled examples are needed in ... more >>>
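
(The $O(\log|H|)$ figure follows from the standard union-bound argument: a hypothesis with error above $\epsilon$ is consistent with $m$ independent examples with probability at most $(1-\epsilon)^m$, so the probability that any bad hypothesis survives is at most $|H|(1-\epsilon)^m$, which drops below $\delta$ once $m = O\big(\frac{1}{\epsilon}\log\frac{|H|}{\delta}\big)$; the difficulty discussed here is that carrying out this elimination seems to require remembering the examples.)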


TR17-116 | 5th July 2017
Michal Moshkovitz, Dana Moshkovitz

Mixing Implies Strong Lower Bounds for Space Bounded Learning

With any hypothesis class one can associate a bipartite graph whose vertices are the hypotheses H on one side and all possible labeled examples X on the other side, and a hypothesis is connected to all the labeled examples that are consistent with it. We call this graph the hypotheses ... more >>>


TR17-121 | 31st July 2017
Sumegha Garg, Ran Raz, Avishay Tal

Extractor-Based Time-Space Lower Bounds for Learning

A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem: An unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples, $(a_1, b_1), (a_2, b_2) \ldots$, where for every $i$, $a_i \in A$ is chosen ... more >>>
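
(A concrete instance of this framework, stated for illustration: taking $A = X = \{0,1\}^n$ and $M(a,x) = (-1)^{\langle a, x\rangle \bmod 2}$ recovers parity learning, where each sample $(a_i, b_i)$ reveals one random linear equation about the unknown $x$; the extractor-type properties of $M$ are what drive the time-space lower bound.)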




ISSN 1433-8092