ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)




Reports tagged with learning:
TR98-013 | 3rd March 1998
Nader Bshouty

A New Composition Theorem for Learning Algorithms


We present a new approach to the composition
of learning algorithms (in various models) for
classes of constant VC-dimension into learning algorithms for
more complicated classes.
We prove that if a class $\mathcal{C}$ is learnable
in time $t$ from a hypothesis class $\mathcal{H}$ of constant VC-dimension
then the class ... more >>>


TR04-001 | 11th December 2003
Lance Fortnow, Russell Impagliazzo, Chris Umans

On the complexity of succinct zero-sum games

We study the complexity of solving succinct zero-sum games,
i.e., the
games whose payoff matrix $M$ is given implicitly by a Boolean circuit
$C$ such that $M(i,j)=C(i,j)$. We complement the known $\EXP$-hardness
of computing the \emph{exact} value of a succinct zero-sum game by
several results on \emph{approximating} the value. (1) ... more >>>
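
For reference, the quantity being computed or approximated here is the standard minimax value of the game: writing $\Delta$ for the set of probability distributions over the row (respectively column) indices of the implicitly given matrix $M$, the value is $v(M) = \max_{p \in \Delta} \min_{q \in \Delta} p^{\top} M q$. (This definition is standard; it is not spelled out in the truncated abstract above.)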


TR04-038 | 27th April 2004
John Case, Sanjay Jain, RĂ¼diger Reischuk, Frank Stephan, Thomas Zeugmann

A Polynomial Time Learner for a Subclass of Regular Patterns

Presented is an algorithm (for learning a subclass of erasing regular
pattern languages) which
can be made to run with arbitrarily high probability of
success on extended regular languages generated by patterns
$\pi$ of the form $x_0 \alpha_1 x_1 \ldots \alpha_m x_m$
for unknown $m$ but known $c$,
more >>>
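
An illustrative example (ours, not from the abstract): with the $\alpha_i$ constant strings and the $x_i$ variables, a pattern such as $\pi = x_0\, ab\, x_1\, c\, x_2$ generates, in the erasing (extended) case, every string obtained by substituting arbitrary, possibly empty, strings for the variables; for instance $abc$ (all variables empty) and $aabca$ (with $x_0 = a$, $x_1$ empty, $x_2 = a$).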


TR05-040 | 13th April 2005
Scott Aaronson

Oracles Are Subtle But Not Malicious

Theoretical computer scientists have been debating the role of
oracles since the 1970's. This paper illustrates both that oracles
can give us nontrivial insights about the barrier problems in
circuit complexity, and that they need not prevent us from trying to
solve those problems.

First, we ... more >>>


TR05-154 | 11th December 2005
Albert Atserias

Non-Uniform Hardness for NP via Black-Box Adversaries

We may believe SAT does not have small Boolean circuits.
But is it possible that some language with small circuits
looks indistinguishable from SAT to every polynomial-time
bounded adversary? We rule out this possibility. More
precisely, assuming SAT does not have small circuits, we
show that ... more >>>


TR06-061 | 5th May 2006
Venkatesan Guruswami, Prasad Raghavendra

Hardness of Learning Halfspaces with Noise

Learning an unknown halfspace (also called a perceptron) from
labeled examples is one of the classic problems in machine learning.
In the noise-free case, when a halfspace consistent with all the
training examples exists, the problem can be solved in polynomial
time using linear programming. ... more >>>
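
A minimal sketch of that linear-programming step (our illustration, not code from the paper; the helper name consistent_halfspace is hypothetical, and strict separability is assumed so the margin can be normalized to 1):

    import numpy as np
    from scipy.optimize import linprog

    def consistent_halfspace(X, y):
        # X: (m, d) array of examples; y: length-m numpy array of +/-1 labels.
        # Returns (w, b) with y_i * (w . x_i + b) >= 1 for all i, or None.
        m, d = X.shape
        c = np.zeros(d + 1)  # feasibility LP over z = (w, b): zero objective
        # y_i * (w . x_i + b) >= 1   <=>   -y_i * [x_i, 1] . z <= -1
        A_ub = -y[:, None] * np.hstack([X, np.ones((m, 1))])
        b_ub = -np.ones(m)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + 1))
        return (res.x[:d], res.x[d]) if res.success else None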


TR06-120 | 12th September 2006
Leslie G. Valiant

Evolvability

Living cells function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as a result of a random search guided by selection and realized by genetic mutations. However, as some observers have noted, there has existed no theory that would ... more >>>


TR07-077 | 7th August 2007
Ilias Diakonikolas, Homin Lee, Kevin Matulef, Krzysztof Onak, Ronitt Rubinfeld, Rocco Servedio, Andrew Wan

Testing for Concise Representations

We describe a general method for testing whether a function on n input variables has a concise representation. The approach combines ideas from the junta test of Fischer et al. with ideas from learning theory, and yields property testers that make poly(s/epsilon) queries (independent of n) for Boolean function classes ... more >>>


TR07-082 | 27th July 2007
Christian Borgs, Jennifer Chayes, Nicole Immorlica, Adam Kalai, Vahab Mirrokni, Christos H. Papadimitriou

The Myth of the Folk Theorem

The folk theorem suggests that finding Nash Equilibria
in repeated games should be easier than in one-shot games. In
contrast, we show that the problem of finding any (epsilon) Nash
equilibrium for a three-player infinitely-repeated game is
computationally intractable (even when all payoffs are in
{-1,0,1}), unless all of PPAD ... more >>>


TR09-006 | 19th January 2009
David Xiao

On basing ZK != BPP on the hardness of PAC learning

Learning is a central task in computer science, and there are various
formalisms for capturing the notion. One important model studied in
computational learning theory is the PAC model of Valiant (CACM 1984).
On the other hand, in cryptography the notion of ``learning nothing''
is often modelled by the simulation ... more >>>


TR10-057 | 1st April 2010
Scott Aaronson, Andrew Drucker

A Full Characterization of Quantum Advice

Revisions: 3

We prove the following surprising result: given any quantum state rho on n qubits, there exists a local Hamiltonian H on poly(n) qubits (e.g., a sum of two-qubit interactions), such that any ground state of H can be used to simulate rho on all quantum circuits of fixed polynomial size. ... more >>>


TR13-024 | 7th February 2013
Valentine Kabanets, Antonina Kolokolova

Compression of Boolean Functions

We consider the problem of compression for ``easy'' Boolean functions: given the truth table of an $n$-variate Boolean function $f$ computable by some \emph{unknown small circuit} from a \emph{known class} of circuits, find in deterministic time $\poly(2^n)$ a circuit $C$ (no restriction on the type of $C$) computing $f$ so ... more >>>


TR13-129 | 17th September 2013
Adam Klivans, Pravesh Kothari, Igor Oliveira

Constructing Hard Functions from Learning Algorithms

Revisions: 1

Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class $\mathcal{C} \subseteq P/poly$ of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial-time, then $EXP^{NP} \not \subseteq \mathcal{C}$ (the class $EXP^{NP}$ was subsequently improved to $P$ by Hitchcock and ... more >>>


TR15-115 | 20th July 2015
Ilya Volkovich

A Guide to Learning Arithmetic Circuits

An \emph{arithmetic circuit} is a directed acyclic graph in which the operations are $\{+,\times\}$.
In this paper, we exhibit several connections between learning algorithms for arithmetic circuits and other problems.
In particular, we show that:

\begin{enumerate}
\item Efficient learning algorithms for arithmetic circuit classes imply explicit exponential lower bounds.

... more >>>
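
A tiny example (ours, for illustration only): the polynomial $f(x_1,x_2,x_3) = (x_1 + x_2) \times (x_2 + x_3)$ is computed by an arithmetic circuit with two $+$ gates whose outputs feed a single $\times$ gate.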

TR16-008 | 26th January 2016
Marco Carmosino, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova

Algorithms from Natural Lower Bounds

Circuit analysis algorithms such as learning, SAT, minimum circuit size, and compression imply circuit lower bounds. We show a generic implication in the opposite direction: natural properties (in the sense of Razborov and Rudich) imply randomized learning and compression algorithms. This is the first such implication outside of the derandomization ... more >>>


TR17-020 | 12th February 2017
Ran Raz

A Time-Space Lower Bound for a Large Class of Learning Problems

We prove a general time-space lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples.

Our result is stated in terms of the norm ... more >>>


TR17-177 | 16th November 2017
Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff

On Communication Complexity of Classification Problems

Revisions: 1

This work introduces a model of distributed learning in the spirit of Yao's communication complexity model. We consider a two-party setting, where each of the players gets a list of labelled examples and they communicate in order to jointly perform some learning task. To naturally fit into the framework of ... more >>>


TR18-114 | 6th June 2018
Paul Beame, Shayan Oveis Gharan, Xin Yang

Time-Space Tradeoffs for Learning Finite Functions from Random Evaluations, with Applications to Polynomials

We develop an extension of recent analytic methods for obtaining time-space tradeoff lower bounds for problems of learning from uniformly random labelled examples. With our methods we can obtain bounds for learning concept classes of finite functions from random evaluations even when the sample space of random inputs can be ... more >>>


TR18-122 | 3rd July 2018
Igor Carboni Oliveira, Rahul Santhanam

Pseudo-derandomizing learning and approximation

We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser
[GG11]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed
output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings ... more >>>


TR19-104 | 6th August 2019
Vishwas Bhargava, Shubhangi Saraf, Ilya Volkovich

Reconstruction of Depth-$4$ Multilinear Circuits

We present a deterministic algorithm for reconstructing multilinear $\Sigma\Pi\Sigma\Pi(k)$ circuits, i.e. multilinear depth-$4$ circuits with fan-in $k$ at the top $+$ gate. For any fixed $k$, given black-box access to a polynomial $f \in \mathbb{F}[x_{1},x_{2},\ldots ,x_{n}]$ computable by a multilinear $\Sigma\Pi\Sigma\Pi(k)$ circuit of size $s$, the algorithm runs in time ... more >>>


TR19-155 | 6th November 2019
Rahul Santhanam

Pseudorandomness and the Minimum Circuit Size Problem

We explore the possibility of basing one-way functions on the average-case hardness of the fundamental Minimum Circuit Size Problem (MCSP[$s$]), which asks whether a Boolean function on $n$ bits specified by its truth table has circuits of size $s(n)$.

1. (Pseudorandomness from Zero-Error Average-Case Hardness) We show that for ... more >>>
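
Spelled out in the standard way (a formalization of ours, not quoted from the abstract): for a size function $s$, $\mathrm{MCSP}[s]$ is the set of truth tables $\mathrm{tt}(f)$ of functions $f\colon\{0,1\}^n \to \{0,1\}$ that have a Boolean circuit of size at most $s(n)$, where $\mathrm{tt}(f)$ denotes the $2^n$-bit truth table of $f$.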


TR19-161 | 13th November 2019
Suprovat Ghoshal, Rishi Saket

Hardness of Learning DNFs using Halfspaces

The problem of learning $t$-term DNF formulas (for $t = O(1)$) has been studied extensively in the PAC model since its introduction by Valiant (STOC 1984). A $t$-term DNF can be efficiently learnt using a $t$-term DNF only if $t = 1$, i.e., when it is an AND, while even ... more >>>
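
For concreteness (an example of ours): $(x_1 \wedge \overline{x_2}) \vee (x_3 \wedge x_4)$ is a $2$-term DNF, while a $1$-term DNF is a single conjunction such as $x_1 \wedge \overline{x_2} \wedge x_5$, i.e., an AND of literals.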


TR21-161 | 16th November 2021
Shuichi Hirahara, Mikito Nanashima

On Worst-Case Learning in Relativized Heuristica

A PAC learning model involves two worst-case requirements: a learner must learn all functions in a class on all example distributions. However, basing the hardness of learning on NP-hardness has remained a key challenge for decades. In fact, recent progress in computational complexity suggests the possibility that a weaker assumption ... more >>>


TR22-072 | 15th May 2022
Halley Goldberg, Valentine Kabanets, Zhenjian Lu, Igor Oliveira

Probabilistic Kolmogorov Complexity with Applications to Average-Case Complexity

Understanding the relationship between the worst-case and average-case complexities of $\mathrm{NP}$ and of other subclasses of $\mathrm{PH}$ is a long-standing problem in complexity theory. Over the last few years, much progress has been achieved in this front through the investigation of meta-complexity: the complexity of problems that refer to the ... more >>>


TR23-080 | 1st June 2023
Halley Goldberg, Valentine Kabanets

Improved Learning from Kolmogorov Complexity

Carmosino, Impagliazzo, Kabanets, and Kolokolova (CCC, 2016) showed that the existence of natural properties in the sense of Razborov and Rudich (JCSS, 1997) implies PAC learning algorithms in the sense of Valiant (Comm. ACM, 1984), for Boolean functions in $\P/\poly$, under the uniform distribution and with membership queries. It is ... more >>>


TR23-100 | 6th July 2023
Shuichi Hirahara, Mikito Nanashima

Learning in Pessiland via Inductive Inference

Pessiland is one of Impagliazzo's five possible worlds in which NP is hard on average, yet no one-way function exists. This world is considered the most pessimistic because it offers neither algorithmic nor cryptographic benefits.

In this paper, we develop a unified framework for constructing strong learning algorithms ... more >>>




ISSN 1433-8092