ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Paper:

TR16-037 | 15th March 2016 10:41

Pseudorandomness when the odds are against you





Authors: Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, Ronen Shaltiel
Publication: 15th March 2016 10:41


Abstract:

Impagliazzo and Wigderson showed that if $\text{E}=\text{DTIME}(2^{O(n)})$ requires size $2^{\Omega(n)}$ circuits, then
every randomized algorithm that runs in time $T$ with constant error can be simulated deterministically in time $\mathrm{poly}(T)$. However, such a polynomial slowdown is a deal breaker when $T=2^{\alpha \cdot n}$ for a constant $\alpha>0$, as is the case for some randomized algorithms for NP-complete problems.

Paturi and Pudlak \cite{PP} observed that many such algorithms are obtained from randomized time-$T$ algorithms, for $T\leq 2^{o(n)}$, with large one-sided error $1-\varepsilon$, for $\varepsilon=2^{-\alpha \cdot n}$, that are repeated $1/\varepsilon$ times to yield a constant-error randomized algorithm running in time $T/\varepsilon=2^{(\alpha+o(1)) \cdot n}$.
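The repetition step above can be sketched in a few lines. This is a minimal illustration, not code from the paper; the function names and the interface of `one_sided_alg` are hypothetical.

```python
import math

def amplify(one_sided_alg, eps, x):
    """Boost a one-sided-error randomized algorithm to constant error.

    Assumed contract (hypothetical): `one_sided_alg(x)` never accepts a
    NO instance, and accepts a YES instance with probability >= eps per
    run.  After ceil(1/eps) independent runs, the chance of missing a
    YES instance is at most (1 - eps)^(1/eps) <= 1/e, a constant, at
    the cost of a 1/eps factor in running time.
    """
    for _ in range(math.ceil(1 / eps)):
        if one_sided_alg(x):
            return True  # one-sided error: an accept is always correct
    return False
```

With $T \leq 2^{o(n)}$ and $\varepsilon = 2^{-\alpha n}$ this yields the $T/\varepsilon = 2^{(\alpha+o(1))n}$ bound quoted above.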

We show that if E requires size $2^{\Omega(n)}$ nondeterministic circuits, then there is a $\mathrm{poly}(n)$-time $\varepsilon$-HSG (Hitting-Set Generator) $H\colon\{0,1\}^{O(\log n) + \log(1/\varepsilon)} \to \{0,1\}^n$
(so that every linear-size Boolean circuit $C$ with $\Pr_{x\in\{0,1\}^n}[C(x)=1]\geq \varepsilon$ accepts the input $H(s)$ for at least one seed $s$),
implying that time-$T$ randomized algorithms with one-sided error $1-\varepsilon$ can be simulated in deterministic time $\mathrm{poly}(T)/\varepsilon$. In particular,
under this hardness assumption, the fastest known constant-error randomized algorithm for $k$-SAT (for $k\ge 4$) by Paturi et al.~\cite{PPSZ} can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff for algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with ``few'' nondeterministic bits.
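The derandomization step can be made concrete with a small sketch. This is an illustration of the generic HSG-based simulation, not the paper's construction; `alg` and `H` are hypothetical placeholders.

```python
from itertools import product

def derandomize_with_hsg(alg, H, seed_len):
    """Deterministic simulation of a one-sided-error algorithm.

    Assumed contract (hypothetical): `alg(x, r)` accepts only on YES
    instances, and on a YES instance accepts for at least an eps
    fraction of random strings r.  If `H` (mapping bit-tuples of
    length seed_len to random strings) is an eps-HSG against the tests
    r -> alg(x, r), then on every YES instance some seed s makes
    alg(x, H(s)) accept, so enumerating all 2^seed_len seeds decides x
    deterministically in time roughly poly(T)/eps.
    """
    def simulate(x):
        return any(alg(x, H(s)) for s in product((0, 1), repeat=seed_len))
    return simulate
```

With a seed length of $O(\log n) + \log(1/\varepsilon)$, the seed enumeration contributes the $1/\varepsilon$ factor in the deterministic time bound.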

Applebaum et al. showed that ``black-box techniques'' cannot achieve $\mathrm{poly}(n)$-time computable $\varepsilon$-PRGs (Pseudo-Random Generators) for $\varepsilon=n^{-\omega(1)}$, even if we assume hardness against circuits with oracle access to an arbitrary language in the polynomial-time hierarchy. We introduce weaker variants of PRGs with \emph{relative error}, which do follow from the latter hardness assumption. Specifically, we say that a function $G:\{0,1\}^r \to \{0,1\}^n$ is an $(\varepsilon,\delta)$-re-PRG for a circuit $C$ if
\(
(1-\varepsilon) \cdot \Pr[C(U_n)=1] - \delta \le \Pr[C(G(U_r))=1] \le (1+\varepsilon) \cdot \Pr[C(U_n)=1] + \delta.
\)
We construct $\mathrm{poly}(n)$-time computable $(\varepsilon,\delta)$-re-PRGs with arbitrary polynomial stretch, $\varepsilon=n^{-O(1)}$ and $\delta=2^{-n^{\Omega(1)}}$. We also construct PRGs with relative error that fool \emph{non-boolean distinguishers} (in the sense introduced by Dubrov and Ishai).
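The re-PRG condition can be checked exactly by brute force on toy parameters, which makes the definition concrete. This is a sketch for illustration only (feasible only for tiny $r$ and $n$); the function name is hypothetical.

```python
from itertools import product

def is_re_prg(G, C, r, n, eps, delta):
    """Exhaustively test the (eps, delta)-re-PRG condition for one circuit C.

    Computes p = Pr[C(U_n)=1] and q = Pr[C(G(U_r))=1] exactly by
    enumerating all inputs and seeds, then checks
        (1 - eps)*p - delta <= q <= (1 + eps)*p + delta.
    The multiplicative eps term is the point of *relative* error: the
    guarantee stays meaningful even when p itself is tiny, where an
    ordinary additive-error PRG with error >= p says nothing.
    """
    p = sum(C(x) for x in product((0, 1), repeat=n)) / 2 ** n
    q = sum(C(G(s)) for s in product((0, 1), repeat=r)) / 2 ** r
    return (1 - eps) * p - delta <= q <= (1 + eps) * p + delta
```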

Our techniques use ideas from \cite{PP,TV,AASY15}. Common themes in our proofs are ``composing'' a PRG/HSG with a combinatorial object such as dispersers and extractors, and the use of nondeterministic reductions in the spirit of Feige and Lund \cite{FeigeLund}.



ISSN 1433-8092