ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)

Revision(s):

Revision #1 to TR19-169 | 17th April 2020 09:19

On Exponential-Time Hypotheses, Derandomization, and Circuit Lower Bounds





Revision #1
Authors: Lijie Chen, Ron Rothblum, Roei Tell, Eylon Yogev
Accepted on: 17th April 2020 09:19
Downloads: 205


Abstract:

The Exponential-Time Hypothesis ($ETH$) is a strengthening of the $\mathcal{P} \neq \mathcal{NP}$ conjecture, stating that $3\text{-}SAT$ on $n$ variables cannot be solved in (uniform) time $2^{\epsilon\cdot n}$, for some $\epsilon>0$. In recent years, analogous hypotheses that are ``exponentially-strong'' forms of other classical complexity conjectures (such as $\mathcal{NP}\nsubseteq\mathcal{BPP}$ or $co\mathcal{NP}\nsubseteq \mathcal{NP}$) have also been introduced, and have become widely influential.
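As a paraphrase (not verbatim from the paper), the three hypotheses discussed below can be written in standard notation, each asserting exponential hardness of $3\text{-}SAT$ against a different machine model:

```latex
\textbf{ETH:}\quad \exists \epsilon>0 \;:\; 3\text{-}SAT \notin \mathrm{DTIME}\!\left(2^{\epsilon\cdot n}\right)
\qquad\text{(deterministic algorithms)}

\textbf{rETH:}\quad \exists \epsilon>0 \;:\; 3\text{-}SAT \notin \mathrm{BPTIME}\!\left(2^{\epsilon\cdot n}\right)
\qquad\text{(probabilistic algorithms)}

\textbf{NETH:}\quad \exists \epsilon>0 \;:\; 3\text{-}SAT \notin co\mathcal{NTIME}\!\left(2^{\epsilon\cdot n}\right)
\qquad\text{(non-deterministic refutation)}
```

Here $rETH$ strengthens $\mathcal{NP}\nsubseteq\mathcal{BPP}$ and $NETH$ strengthens $co\mathcal{NP}\nsubseteq\mathcal{NP}$; see the paper itself for the precise statements and the weaker variants actually used.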

In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely related questions of derandomization and circuit lower bounds. We show that even relatively mild variants of exponential-time hypotheses have far-reaching implications for derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that:

1. The Randomized Exponential-Time Hypothesis ($rETH$) implies that $\mathcal{BPP}$ can be simulated ``on average'' in deterministic (nearly-)polynomial-time (i.e., in time $2^{\tilde{O}(\log(n))}=n^{\mathrm{loglog}(n)^{O(1)}}$). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length $\tilde{O}(\log(n))$); this significantly improves the state-of-the-art in uniform ``hardness-to-randomness'' results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses.
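The standard way such a generator yields a deterministic simulation is by brute-forcing its seeds: run the randomized algorithm on every output of the generator and take a majority vote, so the overhead is $2^{\text{seed length}}$, which is nearly polynomial when the seed length is $\tilde{O}(\log n)$. The sketch below illustrates only this seed-enumeration step; `toy_prg` and `parity` are hypothetical stand-ins, not the paper's construction.

```python
from itertools import product

def derandomize(A, prg, seed_len, x):
    """Deterministically simulate a randomized decider A(x, r):
    enumerate every seed of the generator prg, feed each generated
    string to A, and return the majority answer. The cost is
    2**seed_len runs of A, so seed length ~O(log n) (up to
    polyloglog factors) gives a nearly-polynomial-time simulation."""
    votes = 0
    for seed in product([0, 1], repeat=seed_len):
        r = prg(seed)                 # pseudorandom coins fed to A
        votes += 1 if A(x, r) else -1
    return votes > 0                  # majority vote over all seeds

# Toy instantiation (for illustration only): a "generator" that just
# repeats its seed, and a decider that ignores its coins entirely.
toy_prg = lambda seed: seed * 4
parity = lambda x, r: sum(x) % 2 == 0
print(derandomize(parity, toy_prg, 3, [1, 0, 1, 0]))  # True
```

Of course, the actual result only gives an average-case simulation: the generator is guaranteed to fool the algorithm on most inputs, not on every input.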

2. The Non-Deterministic Exponential-Time Hypothesis ($NETH$) implies that derandomization of $\mathcal{BPP}$ is completely equivalent to circuit lower bounds against $\mathcal{E}$, and in particular that pseudorandom generators are necessary for derandomization. In fact, we show that the foregoing equivalence follows from a very weak version of $NETH$, and we also show that this very weak version is necessary to prove a slightly stronger conclusion that we deduce from it.

Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if $CircuitSAT$ for circuits over $n$ bits of size $\mathrm{poly}(n)$ can be solved by probabilistic algorithms in time $2^{n/\mathrm{polylog}(n)}$, then $\mathcal{BPE}$ does not have circuits of quasilinear size.



Changes to previous version:

Significantly revised the exposition, and fixed a few bugs in Section 5.


Paper:

TR19-169 | 21st November 2019 20:45

On Exponential-Time Hypotheses, Derandomization, and Circuit Lower Bounds





TR19-169
Authors: Lijie Chen, Ron Rothblum, Roei Tell, Eylon Yogev
Publication: 24th November 2019 13:50
Downloads: 619


Abstract:

The Exponential-Time Hypothesis ($ETH$) is a strengthening of the $\mathcal{P} \neq \mathcal{NP}$ conjecture, stating that $3\text{-}SAT$ on $n$ variables cannot be solved in time $2^{\epsilon\cdot n}$, for some $\epsilon>0$. In recent years, analogous hypotheses that are ``exponentially-strong'' forms of other classical complexity conjectures (such as $\mathcal{NP}\not\subseteq\mathcal{BPP}$ or $co\text{-}\mathcal{NP}\not\subseteq \mathcal{NP}$) have also been considered. These Exponential-Time Hypotheses have been widely influential across different areas of complexity theory. However, their connections to *derandomization and circuit lower bounds* have yet to be systematically studied. Such study is indeed the focus of the current work, and we prove a sequence of results demonstrating that *the connections between exponential-time hypotheses, derandomization, and circuit lower bounds are remarkably strong*.

First, we show that if $3\text{-}SAT$ (or even $TQBF$) cannot be solved by probabilistic algorithms that run in time $2^{n/\mathrm{polylog}(n)}$, then $\mathcal{BPP}$ can be deterministically simulated ``on average'' in (nearly-)polynomial-time (i.e., in time $n^{\mathrm{polyloglog}(n)}$). This result addresses a long-standing lacuna in uniform ``hardness-to-randomness'' results, which did not previously extend to such parameter settings. Moreover, we extend this result to support an ``almost-always'' derandomization conclusion from an ``almost-always'' lower bound hypothesis.

Secondly, we show that *disproving* certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if $CircuitSAT$ for circuits over $n$ bits of size $\mathrm{poly}(n)$ can be solved by *probabilistic algorithms* in time $2^{n/\mathrm{polylog}(n)}$, then $\mathcal{BPE}$ does not have circuits of quasilinear size. The main novel feature of this result is that we only assume the existence of a *randomized* circuit-analysis algorithm, whereas previous similar results crucially relied on the hypothesis that the circuit-analysis algorithm does not use randomness.

Thirdly, we show that a very weak exponential-time hypothesis is closely-related to the classical question of whether derandomization and circuit lower bounds are *equivalent*. Specifically, we show two-way implications between the hypothesis that the foregoing equivalence holds and the hypothesis that $\mathcal{E}$ cannot be decided by ``small'' circuits that are *uniformly generated* by relatively-efficient non-deterministic machines. This highlights a sufficient-and-necessary path for progress towards proving that derandomization and circuit lower bounds are indeed equivalent.



ISSN 1433-8092