Nader Bshouty

We present a new approach to the composition of learning algorithms (in various models) for classes of constant VC-dimension into learning algorithms for more complicated classes. We prove that if a class $\CC$ is learnable in time $t$ from a hypothesis class $\HH$ of constant VC-dimension then the class ...
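As a reminder of the central definition here, the VC dimension of a class is the size of the largest set it shatters. A brute-force check over a tiny domain, purely illustrative and not from the paper (the function names are our own):

```python
from itertools import combinations

def is_shattered(points, concept_class):
    """A set of points is shattered if every one of the 2^|points|
    labelings is realized by some concept in the class."""
    labelings = {tuple(x in c for x in points) for c in concept_class}
    return len(labelings) == 2 ** len(points)

def vc_dimension(domain, concept_class):
    """Largest d such that some d-subset of the domain is shattered.
    Since subsets of shattered sets are shattered, we can stop at the
    first size d for which no d-set is shattered."""
    best = 0
    for d in range(1, len(domain) + 1):
        if any(is_shattered(S, concept_class) for S in combinations(domain, d)):
            best = d
        else:
            break
    return best

# Threshold functions {x : x >= t} on {0,1,2,3} have VC dimension 1:
# a single point is shattered, but no pair can be labeled (+,-).
domain = [0, 1, 2, 3]
thresholds = [frozenset(x for x in domain if x >= t) for t in range(5)]
print(vc_dimension(domain, thresholds))  # 1
```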

Lance Fortnow, Russell Impagliazzo, Chris Umans

We study the complexity of solving succinct zero-sum games, i.e., the games whose payoff matrix $M$ is given implicitly by a Boolean circuit $C$ such that $M(i,j)=C(i,j)$. We complement the known $\EXP$-hardness of computing the \emph{exact} value of a succinct zero-sum game by several results on \emph{approximating} the value. (1) ...
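The succinctness lies in the circuit representation; the approximation task itself can be illustrated on an explicit matrix. A minimal multiplicative-weights sketch, a standard approximation technique and not the algorithm of this paper, that estimates the value of a small zero-sum game:

```python
import math

def approx_game_value(M, T=2000):
    """Estimate the value of the zero-sum game with payoff matrix M
    (row player maximizes). The row player runs multiplicative weights;
    the column player best-responds each round. The average realized
    payoff converges to the game's value at rate O(sqrt(log n / T))."""
    n = len(M)
    eta = math.sqrt(math.log(n) / T)
    w = [1.0] * n
    total = 0.0
    for _ in range(T):
        s = sum(w)
        p = [wi / s for wi in w]
        # Column player minimizes the row player's expected payoff.
        j = min(range(len(M[0])),
                key=lambda j: sum(p[i] * M[i][j] for i in range(n)))
        total += sum(p[i] * M[i][j] for i in range(n))
        for i in range(n):  # reward rows that would have done well
            w[i] *= math.exp(eta * M[i][j])
    return total / T

# Matching pennies has value 0.
M = [[1, -1], [-1, 1]]
print(approx_game_value(M))  # close to 0, the game's value
```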

John Case, Sanjay Jain, Rüdiger Reischuk, Frank Stephan, Thomas Zeugmann

Presented is an algorithm (for learning a subclass of erasing regular pattern languages) which can be made to run with arbitrarily high probability of success on extended regular languages generated by patterns $\pi$ of the form $x_0 \alpha_1 x_1 ... \alpha_m x_m$ for unknown $m$ but known $c$, ...
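Under erasing substitutions, a pattern of the form $x_0 \alpha_1 x_1 ... \alpha_m x_m$ generates exactly the strings that contain the constant blocks $\alpha_1, ..., \alpha_m$ in order, which makes membership a simple regular-expression test. A small illustrative sketch (the function name is our own, not the paper's):

```python
import re

def pattern_matcher(alphas):
    """Membership test for the erasing pattern language generated by
    x_0 a_1 x_1 ... a_m x_m: each variable x_i can be replaced by any
    string (including the empty one), so the language is exactly the
    strings matching the regex  .* a_1 .* a_2 ... a_m .*  ."""
    body = ".*".join(re.escape(a) for a in alphas)
    return re.compile(".*" + body + ".*").fullmatch

accepts = pattern_matcher(["ab", "c"])  # pattern: x0 ab x1 c x2
print(bool(accepts("xxabyyczz")))  # True:  "ab" occurs before "c"
print(bool(accepts("cab")))        # False: "c" occurs only before "ab"
```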

Scott Aaronson

Theoretical computer scientists have been debating the role of oracles since the 1970's. This paper illustrates both that oracles can give us nontrivial insights about the barrier problems in circuit complexity, and that they need not prevent us from trying to solve those problems.

First, we ...

Albert Atserias

We may believe SAT does not have small Boolean circuits. But is it possible that some language with small circuits looks indistinguishable from SAT to every polynomial-time bounded adversary? We rule out this possibility. More precisely, assuming SAT does not have small circuits, we show that ...

Venkatesan Guruswami, Prasad Raghavendra

Learning an unknown halfspace (also called a perceptron) from labeled examples is one of the classic problems in machine learning. In the noise-free case, when a halfspace consistent with all the training examples exists, the problem can be solved in polynomial time using linear programming. ...
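The abstract mentions linear programming for the noise-free case; the classic perceptron update is an equally simple consistent learner when the data are linearly separable, and it avoids needing an LP solver. A sketch of that standard algorithm (not this paper's contribution):

```python
def perceptron(examples, max_epochs=100):
    """Find a halfspace sign(w.x + b) consistent with linearly separable
    labelled examples (x, y), y in {-1, +1}. On each mistake, nudge the
    hyperplane toward the misclassified point; for separable data the
    number of mistakes is bounded, so this terminates."""
    dim = len(examples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in examples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:  # a clean pass: hypothesis is consistent
            return w, b
    raise ValueError("no consistent halfspace found within budget")

# AND of two bits, relabeled to {-1, +1}: linearly separable.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = perceptron(data)
```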

Leslie G. Valiant

Living cells function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as a result of a random search guided by selection and realized by genetic mutations. However, as some observers have noted, there has existed no theory that would ...

Ilias Diakonikolas, Homin Lee, Kevin Matulef, Krzysztof Onak, Ronitt Rubinfeld, Rocco Servedio, Andrew Wan

We describe a general method for testing whether a function on n input variables has a concise representation. The approach combines ideas from the junta test of Fischer et al. with ideas from learning theory, and yields property testers that make poly(s/epsilon) queries (independent of n) for Boolean function classes ...

Christian Borgs, Jennifer Chayes, Nicole Immorlica, Adam Kalai, Vahab Mirrokni, Christos H. Papadimitriou

The folk theorem suggests that finding Nash Equilibria in repeated games should be easier than in one-shot games. In contrast, we show that the problem of finding any (epsilon) Nash equilibrium for a three-player infinitely-repeated game is computationally intractable (even when all payoffs are in {-1,0,1}), unless all of PPAD ...

David Xiao

Learning is a central task in computer science, and there are various formalisms for capturing the notion. One important model studied in computational learning theory is the PAC model of Valiant (CACM 1984). On the other hand, in cryptography the notion of ``learning nothing'' is often modelled by the simulation ...

Scott Aaronson, Andrew Drucker

We prove the following surprising result: given any quantum state rho on n qubits, there exists a local Hamiltonian H on poly(n) qubits (e.g., a sum of two-qubit interactions), such that any ground state of H can be used to simulate rho on all quantum circuits of fixed polynomial size. ...

Valentine Kabanets, Antonina Kolokolova

We consider the problem of compression for ``easy'' Boolean functions: given the truth table of an $n$-variate Boolean function $f$ computable by some \emph{unknown small circuit} from a \emph{known class} of circuits, find in deterministic time $\poly(2^n)$ a circuit $C$ (no restriction on the type of $C$) computing $f$ so ...

Adam Klivans, Pravesh Kothari, Igor Oliveira

Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class $\mathcal{C} \subseteq P/poly$ of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial-time, then $EXP^{NP} \not \subseteq \mathcal{C}$ (the class $EXP^{NP}$ was subsequently improved to $P$ by Hitchcock and ...

Ilya Volkovich

An \emph{arithmetic circuit} is a directed acyclic graph in which the operations are $\{+,\times\}$.

In this paper, we exhibit several connections between learning algorithms for arithmetic circuits and other problems.

In particular, we show that:
\begin{enumerate}
\item Efficient learning algorithms for arithmetic circuit classes imply explicit exponential lower bounds.
\end{enumerate}
...
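A directed acyclic graph with $\{+,\times\}$ gates, as in the definition above, can be evaluated gate by gate in topological order. A minimal illustrative sketch (the encoding and names are ours, not the paper's):

```python
def eval_arithmetic_circuit(gates, inputs):
    """Evaluate an arithmetic circuit given as a DAG listed in topological
    order. Each gate is ('var', name), ('const', c), or (op, i, j) with
    op in {'+', '*'} and i, j indices of earlier gates. The circuit's
    output is the value of the last gate."""
    vals = []
    for g in gates:
        if g[0] == 'var':
            vals.append(inputs[g[1]])
        elif g[0] == 'const':
            vals.append(g[1])
        elif g[0] == '+':
            vals.append(vals[g[1]] + vals[g[2]])
        else:  # '*'
            vals.append(vals[g[1]] * vals[g[2]])
    return vals[-1]

# (x + y) * (x + 2): a DAG, since the input gate for x is reused.
circuit = [('var', 'x'), ('var', 'y'), ('const', 2),
           ('+', 0, 1), ('+', 0, 2), ('*', 3, 4)]
print(eval_arithmetic_circuit(circuit, {'x': 3, 'y': 1}))  # (3+1)*(3+2) = 20
```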

Marco Carmosino, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova

Circuit analysis algorithms such as learning, SAT, minimum circuit size, and compression imply circuit lower bounds. We show a generic implication in the opposite direction: natural properties (in the sense of Razborov and Rudich) imply randomized learning and compression algorithms. This is the first such implication outside of the derandomization ...

Ran Raz

We prove a general time-space lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples.

Our result is stated in terms of the norm ...

Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff

This work introduces a model of distributed learning in the spirit of Yao's communication complexity model. We consider a two-party setting, where each of the players gets a list of labelled examples and they communicate in order to jointly perform some learning task. To naturally fit into the framework of ...

Paul Beame, Shayan Oveis Gharan, Xin Yang

We develop an extension of recent analytic methods for obtaining time-space tradeoff lower bounds for problems of learning from uniformly random labelled examples. With our methods we can obtain bounds for learning concept classes of finite functions from random evaluations even when the sample space of random inputs can be ...

Igor Carboni Oliveira, Rahul Santhanam

We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [GG11]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings ...

Vishwas Bhargava, Shubhangi Saraf, Ilya Volkovich

We present a deterministic algorithm for reconstructing multilinear $\Sigma\Pi\Sigma\Pi(k)$ circuits, i.e. multilinear depth-$4$ circuits with fan-in $k$ at the top $+$ gate. For any fixed $k$, given black-box access to a polynomial $f \in \mathbb{F}[x_{1},x_{2},\ldots ,x_{n}]$ computable by a multilinear $\Sigma\Pi\Sigma\Pi(k)$ circuit of size $s$, the algorithm runs in time ...

Rahul Santhanam

We explore the possibility of basing one-way functions on the average-case hardness of the fundamental Minimum Circuit Size Problem (MCSP[$s$]), which asks whether a Boolean function on $n$ bits specified by its truth table has circuits of size $s(n)$.

1. (Pseudorandomness from Zero-Error Average-Case Hardness) We show that for ...

Suprovat Ghoshal, Rishi Saket

The problem of learning $t$-term DNF formulas (for $t = O(1)$) has been studied extensively in the PAC model since its introduction by Valiant (STOC 1984). A $t$-term DNF can be efficiently learnt using a $t$-term DNF only if $t = 1$, i.e., when it is an AND, while even ...
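For the $t = 1$ case mentioned above, the classic elimination algorithm learns a conjunction of literals: start with all $2n$ literals and delete any literal that some positive example falsifies. Since the target's literals are never falsified by positive examples, the hypothesis is always at least as specific as the target and never errs on negatives. A sketch of that standard textbook algorithm (not this paper's construction):

```python
def learn_conjunction(n, positives):
    """Learn a conjunction of literals over x_0..x_{n-1} from positive
    examples. A literal is (i, b): variable i must equal bit b. Start
    with all 2n literals and keep only those every positive example
    satisfies; the result is the most specific consistent conjunction."""
    lits = {(i, b) for i in range(n) for b in (0, 1)}
    for x in positives:
        lits = {(i, b) for (i, b) in lits if x[i] == b}
    return lits

def predict(lits, x):
    """A conjunction accepts x iff every remaining literal holds."""
    return all(x[i] == b for (i, b) in lits)

# Target: x0 AND NOT x2, over n = 3 variables.
pos = [(1, 0, 0), (1, 1, 0)]
h = learn_conjunction(3, pos)
print(sorted(h))  # [(0, 1), (2, 0)], i.e., exactly x0 AND NOT x2
```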