We study the complexity of building
pseudorandom generators (PRGs) from hard functions.
We show that, starting from a function $f : \{0,1\}^l \to \{0,1\}$ that
is mildly hard on average, i.e., every circuit of size $2^{\Omega(l)}$
fails to compute $f$ on at least a $1/\poly(l)$
fraction of inputs, we can ...
more >>>
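For concreteness, the mild average-case hardness assumption above says that there are constants $\eps > 0$ and $c \ge 1$ such that
$$\Pr_{x \in \{0,1\}^l}\bigl[C(x) \neq f(x)\bigr] \;\ge\; \frac{1}{l^{c}} \qquad \text{for every circuit } C \text{ of size at most } 2^{\eps l},$$
which is simply the condition stated in the abstract with the hidden constants named.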
We study pseudorandom generator (PRG) constructions $G^f : \{0,1\}^l \to \{0,1\}^{l+s}$ from one-way functions $f : \{0,1\}^n \to \{0,1\}^m$. We consider PRG constructions of the form $G^f(x) = C(f(q_1) \ldots f(q_{\poly(n)}))$
where $C$ is a polynomial-size constant depth circuit
and $C$ and the $q$'s are generated from $x$ arbitrarily.
more >>>
We revisit the problem of hardness amplification in $\NP$, as
recently studied by O'Donnell (STOC '02). We prove that if $\NP$
has a balanced function $f$ such that any circuit of size $s(n)$
fails to compute $f$ on a $1/\poly(n)$ fraction of inputs, then
$\NP$ has a function $f'$ such ...
more >>>
We prove a version of the derandomized Direct Product Lemma for
deterministic space-bounded algorithms. Suppose a Boolean function
$g:\{0,1\}^n\to\{0,1\}$ cannot be computed on more than $1-\delta$
fraction of inputs by any deterministic time $T$ and space $S$
algorithm, where $\delta\leq 1/t$ for some $t$. Then, for $t$-step
walks $w=(v_1,\dots, v_t)$ ...
more >>>
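The abstract is cut off just as the derandomized direct product is being introduced; for orientation only, the standard derandomized direct-product construction (not necessarily the exact variant used here) evaluates $g$ along an expander walk rather than on independent inputs:
$$g_t(w) \;=\; \bigl(g(v_1), \dots, g(v_t)\bigr), \qquad w=(v_1,\dots,v_t) \text{ a $t$-step walk on an expander graph with vertex set } \{0,1\}^n,$$
which reduces the input length from $tn$ to roughly $n + O(t)$ bits while still amplifying hardness.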
We consider two of the most fundamental theorems in Cryptography. The first, due to Håstad et al. [HILL99], is that pseudorandom generators can be constructed from any one-way function. The second, due to Yao [Yao82], states that the existence of weak one-way functions (i.e. functions on which every efficient algorithm ... more >>>
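For reference, the construction underlying Yao's weak-to-strong amplification is the direct product: from a weak one-way function $f$ one defines
$$F(x_1,\dots,x_k) \;=\; \bigl(f(x_1),\dots,f(x_k)\bigr),$$
which is strongly one-way for a suitably large polynomial $k = k(n)$.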
We consider the problem of amplifying uniform average-case hardness
of languages in $\NP$, where hardness is with respect to $\BPP$
algorithms. We introduce the notion of \emph{monotone}
error-correcting codes, and show that hardness amplification for
$\NP$ is essentially equivalent to constructing efficiently
\emph{locally} encodable and \emph{locally} list-decodable monotone
codes. The ...
more >>>
We study the average-case hardness of the class NP against
deterministic polynomial time algorithms. We prove that there exists
some constant $\mu > 0$ such that if there is some language $L$ in NP
for which no deterministic polynomial time algorithm can decide $L$
correctly on a $1 - (\log n)^{-\mu}$ fraction ...
more >>>
An errorless heuristic is an algorithm that on all inputs returns either the correct answer or the special symbol "I don't know." A central question in average-case complexity is whether every distributional decision problem in NP has an errorless heuristic scheme: This is an algorithm that, for every δ > ... more >>>
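In the standard formalization matching this description, an errorless heuristic $A$ with failure parameter $\delta$ for a distributional problem $(L, \{D_n\})$ satisfies
$$A(x) \in \{L(x), \bot\} \ \text{ for every input } x, \qquad \Pr_{x \sim D_n}\bigl[A(x) = \bot\bigr] \;\le\; \delta,$$
and an errorless heuristic scheme is a single algorithm that achieves this for every $\delta > 0$, given $\delta$ as a parameter, in time polynomial in $n$ and $1/\delta$.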
Hardness amplification is the fundamental task of
converting a $\delta$-hard function $f : \{0,1\}^n \to
\{0,1\}$ into a $(1/2-\eps)$-hard function $Amp(f)$,
where $f$ is $\gamma$-hard if small circuits fail to
compute $f$ on at least a $\gamma$ fraction of the
inputs. Typically, $\eps,\delta$ are small (and
$\delta=2^{-k}$ captures the case ...
more >>>
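A canonical amplifier of this kind (stated here only as a standard example, not necessarily the construction studied in the abstract) is Yao's XOR lemma:
$$Amp(f)(x_1,\dots,x_k) \;=\; f(x_1) \oplus \cdots \oplus f(x_k),$$
which, roughly, turns a $\delta$-hard $f$ into a $(1/2-\eps)$-hard function with $\eps \approx (1-2\delta)^{k}$, at the cost of a moderate loss in circuit size.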
The classical Direct-Product Theorem for circuits says
that if a Boolean function $f:\{0,1\}^n\to\{0,1\}$ is somewhat hard
to compute on average by small circuits, then the corresponding
$k$-wise direct product function
$f^k(x_1,\dots,x_k)=(f(x_1),\dots,f(x_k))$ (where each
$x_i\in\{0,1\}^n$) is significantly harder to compute on average by
slightly smaller ...
more >>>
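Quantitatively, the classical conclusion is that, for circuits of somewhat smaller size $s'$, the success probability on $f^k$ drops roughly exponentially in $k$:
$$\Pr_{x_1,\dots,x_k}\bigl[C(x_1,\dots,x_k) = f^k(x_1,\dots,x_k)\bigr] \;\lesssim\; (1-\delta)^{k} + \eps \qquad \text{for every circuit } C \text{ of size } s',$$
where $\delta$ is the original hardness and $\eps$ accounts for the loss in the reduction; the exact parameters vary between proofs.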
The question whether or not parallel repetition reduces the soundness error is a fundamental question in the theory of protocols. While parallel repetition reduces (at an exponential rate) the error in interactive proofs and (at a weak exponential rate) in special cases of interactive arguments (e.g., 3-message protocols - Bellare, ... more >>>
We establish new hardness amplification results for one-way functions in which each input bit influences only a small number of output bits (a.k.a. input-local functions). Our transformations differ from previous ones in that they approximately preserve input locality and at the same time retain the input size of the original ... more >>>
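Input locality, as used above, can be stated precisely: $f : \{0,1\}^n \to \{0,1\}^m$ has input locality $d$ if
$$\bigl|\{\, j \in [m] \;:\; \text{the output bit } f(x)_j \text{ depends on } x_i \,\}\bigr| \;\le\; d \qquad \text{for every } i \in [n].$$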
Hardness amplification results show that for every function $f$ there exists a function $Amp(f)$ such that the following holds: if every circuit of size $s$ computes $f$ correctly on at most a $1-\delta$ fraction of inputs, then every circuit of size $s'$ computes $Amp(f)$ correctly on at most a $1/2+\eps$ ... more >>>
We establish a generic form of hardness amplification for the approximability of constant-depth Boolean circuits by polynomials. Specifically, we show that if a Boolean circuit cannot be pointwise approximated by low-degree polynomials to within constant error in a certain one-sided sense, then an OR of disjoint copies of that circuit ... more >>>
We prove a parallel repetition theorem for general games with value tending to 0. Previously Dinur and Steurer proved such a theorem for the special case of projection games. We use information theoretic techniques in our proof. Our proofs also extend to the high value regime (value close to 1) ... more >>>
Information complexity is the interactive analogue of Shannon's classical information theory. In recent years this field has emerged as a powerful tool for proving strong communication lower bounds, and for addressing some of the major open problems in communication complexity and circuit complexity. A notable achievement of information complexity is ... more >>>
We investigate the value of parallel repetition of one-round games with any number of players $k\ge 2$. It has been an open question whether an analogue of Raz's Parallel Repetition Theorem holds for games with more than two players, i.e., whether the value of the repeated game decays exponentially ... more >>>
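For context (standard definitions, not specific to this abstract): in the $n$-fold parallel repetition $G^{\otimes n}$, the referee samples $n$ independent question tuples, each player answers all $n$ coordinates at once, and the players win iff they win every coordinate. The question of exponential decay is whether
$$\mathrm{val}(G^{\otimes n}) \;\le\; 2^{-\Omega(n)} \qquad \text{whenever } \mathrm{val}(G) < 1,$$
as Raz's theorem guarantees for two-player games (with the constant in the exponent depending on $G$).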
In a seminal work, Williams [Wil14] showed that NEXP (non-deterministic exponential time) does not have polynomial-size ACC^0 circuits. Williams' technique inherently gives a worst-case lower bound, and until now, no average-case version of his result was known.
We show that there is a language L in NEXP (resp. EXP^NP) ... more >>>
The Unique Games Conjecture (UGC), proposed by Khot [Khot02], lies at the center of many inapproximability results. At the heart of the UGC lies the approximability of MAX-CUT, which is a special case of Unique Games. [KhotKMO04, MosselOO05] showed that, assuming the Unique Games Conjecture, it is NP-hard to distinguish between a MAX-CUT instance that has a ... more >>>
We show that proving mildly super-linear lower bounds on non-commutative arithmetic circuits implies exponential lower bounds on non-commutative circuits. That is, non-commutative circuit complexity is a threshold phenomenon: an apparently weak lower bound actually suffices to show the strongest lower bounds we could desire.
This is part of a recent ... more >>>
Consider a PPT two-party protocol π=(A,B) in which the parties get no private inputs and obtain outputs O^A,O^B ∈ {0,1}, and let V^A and V^B denote the parties’ individual views. Protocol π has α-agreement if Pr[O^A=O^B]=1/2+α. The leakage of π is the amount of information a party obtains about the event {O^A=O^B}; ... more >>>
In this paper, we establish a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem $\Pi$ is direct product feasible if it is possible to efficiently aggregate any $k$ instances of $\Pi$ and form one large instance ...
more >>>
We exactly characterize the average-case complexity of the polynomial-time hierarchy (PH) by the worst-case (meta-)complexity of GapMINKT(PH), i.e., an approximation version of the problem of determining if a given string can be compressed to a short PH-oracle efficient program. Specifically, we establish the following equivalence:
DistPH is contained in ... more >>>
We consider the question of hardness self-amplification: Given a Boolean function $f$ that is hard to compute on a $o(1)$-fraction of inputs drawn from some distribution, can we prove that $f$ is hard to compute on a $(\frac{1}{2} - o(1))$-fraction of inputs drawn from the same distribution? We prove hardness ... more >>>
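One way to write the question in symbols, for a hardness parameter $\delta(n) = o(1)$ and with the same input distribution $D$ on both sides:
$$\Bigl(\forall \text{ efficient } A:\ \Pr_{x \sim D}\bigl[A(x) \neq f(x)\bigr] \ge \delta(n)\Bigr) \;\stackrel{?}{\Longrightarrow}\; \Bigl(\forall \text{ efficient } A:\ \Pr_{x \sim D}\bigl[A(x) \neq f(x)\bigr] \ge \tfrac{1}{2} - o(1)\Bigr).$$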
Strong (resp. weak) average-case hardness refers to the properties of a computational problem in which a large (resp. small) fraction of instances are hard to solve. We develop a general framework for proving hardness self-amplification, that is, the equivalence between strong and weak average-case hardness. Using this framework, we prove ... more >>>
In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to $0$ modulo $M$ (this set is called a ``solution''). In the related $k$-XOR problem, given $k$ uniformly random Boolean vectors of length $\log{M}$, ... more >>>
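Spelled out, a solution to the average-case $k$-SUM instance above is a size-$k$ subset of the given integers $a_1,\dots,a_r$ with
$$S \subseteq [r], \quad |S| = k, \qquad \sum_{i \in S} a_i \;\equiv\; 0 \pmod{M};$$
in $k$-XOR the condition is instead that the bitwise XOR of the chosen length-$\log M$ Boolean vectors is the all-zero vector.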
We study hardness amplification in the context of two well-known "moderate" average-case hardness results for $\mathrm{AC}^0$ circuits. First, we investigate the extent to which $\mathrm{AC}^0$ circuits of depth $d$ can approximate $\mathrm{AC}^0$ circuits of some larger depth $d + k$. The case $k = 1$ is resolved by Håstad, Rossman, ... more >>>