Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR20-148 | 11th April 2021 17:52

Simple and fast derandomization from very hard functions: Eliminating randomness at almost no cost





Revision #1
Authors: Lijie Chen, Roei Tell
Accepted on: 11th April 2021 17:52
Downloads: 912


Abstract:

Extending the classical ``hardness-to-randomness'' line of work, Doron, Moshkovitz, Oh, and Zuckerman (FOCS 2020) recently proved that derandomization with near-quadratic time overhead is possible, under the assumption that there exists a function in $\mathcal{DTIME}[2^n]$ that cannot be computed by randomized SVN circuits of size $2^{(1-\epsilon)\cdot n}$ for a small $\epsilon$.
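For orientation, ``derandomization with near-quadratic time overhead'' can be read, in complexity-class notation, roughly as the containment displayed below; the exponent $2+\alpha$, for an arbitrarily small constant $\alpha>0$, is our shorthand paraphrase of ``near-quadratic'' rather than a verbatim statement of their theorem:

$$\mathcal{BPTIME}[T(n)]\subseteq\mathcal{DTIME}\big[T(n)^{2+\alpha}\big]\quad\text{for an arbitrarily small constant }\alpha>0.$$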

In this work we extend their inquiry and answer several open questions that arose from their work. For a time function $T(n)$, consider the following assumption: Non-uniformly secure one-way functions exist, and for $\delta=\delta(\epsilon)$ and $k=k_T(\epsilon)$ there exists a problem in $\mathcal{DTIME}[2^{k\cdot n}]$ that is hard for algorithms that run in time $2^{(k-\delta)\cdot n}$ and use $2^{(1-\delta)\cdot n}$ bits of advice. Under this assumption, we show that:

1. (Worst-case derandomization.) Probabilistic algorithms that run in time $T(n)$ can be deterministically simulated in time $n\cdot T(n)^{1+\epsilon}$ (see the displayed statement after this list).

2. (Average-case derandomization.) For polynomial time functions $T(n)=\mathrm{poly}(n)$, we can improve the derandomization time to $n^{\epsilon}\cdot T(n)$ if we allow the derandomization to succeed only on average, rather than in the worst-case.

3. (Conditional optimality.) For worst-case derandomization, the multiplicative time overhead of $n$ is essentially optimal, conditioned on a counting version of the non-deterministic strong exponential-time hypothesis (i.e., on $\#\mathtt{NSETH}$).
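To make item 1 concrete, the hypothesis and the worst-case conclusion can be combined into a single displayed implication. This is only a restatement of the text above, written in complexity-class notation, with $\delta=\delta(\epsilon)$ and $k=k_T(\epsilon)$ as in the hypothesis:

$$\big(\text{non-uniformly secure OWFs exist}\big)\;\wedge\;\big(\exists L\in\mathcal{DTIME}[2^{k\cdot n}]\ \text{hard for time-}2^{(k-\delta)\cdot n}\ \text{algorithms using}\ 2^{(1-\delta)\cdot n}\ \text{bits of advice}\big)\;\Longrightarrow\;\mathcal{BPTIME}[T(n)]\subseteq\mathcal{DTIME}\big[n\cdot T(n)^{1+\epsilon}\big].$$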

Lastly, we present an alternative proof for the result of Doron, Moshkovitz, Oh, and Zuckerman that is simpler and more versatile. In fact, we show how to simplify the analysis not only of their construction, but of any construction that ``extracts randomness from a pseudoentropic string''.



Changes to previous version:

Improving the exposition and incorporating comments by reviewers.


Paper:

TR20-148 | 28th September 2020 03:00

Simple and fast derandomization from very hard functions: Eliminating randomness at almost no cost





TR20-148
Authors: Lijie Chen, Roei Tell
Publication: 29th September 2020 20:09
Downloads: 1383


Abstract:

Extending the classical ``hardness-to-randomness'' line of work, Doron et al. (FOCS 2020) recently proved that derandomization with near-quadratic time overhead is possible, under the assumption that there exists a function in $\mathcal{DTIME}[2^n]$ that cannot be computed by randomized SVN circuits of size $2^{(1-\epsilon)\cdot n}$ for a small $\epsilon$.

In this work we extend their inquiry and answer several open questions that arose from their work. Our main result is that *derandomization with almost no time overhead is possible*, under a plausible hypothesis. Specifically, we show that probabilistic algorithms that run in time $T(n)$ can be deterministically simulated in time $n\cdot T(n)^{1+\epsilon}$, under a hypothesis that is formally incomparable to the one of Doron et al., but is arguably more standard: We assume that there exist non-uniformly secure one-way functions, and that for $\delta=\delta(\epsilon)$ and $k=k_T(\epsilon)$ there exists a problem in $\mathcal{DTIME}[2^{k\cdot n}]$ that is hard for algorithms that run in time $2^{(k-\delta)\cdot n}$ and use $2^{(1-\delta)\cdot n}$ bits of advice. We also show that the latter hypothesis (or, more accurately, a relaxation of it that we use) is in fact necessary to obtain the derandomization conclusion if one relies on a PRG construction (as is indeed our approach).
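Since our approach relies on a PRG construction, the following minimal sketch (purely illustrative, and not the construction of this paper) shows where the time overhead in the generic PRG-based paradigm comes from. The routines prg, seed_len, and run_with_randomness are hypothetical placeholders: prg(s, m) expands a seed s into m bits that fool the given time-$T(n)$ machine on the input at hand, seed_len(m) is the PRG's seed length for output length m, and run_with_randomness(x, r) runs the probabilistic machine on input x with random tape r.

    # Illustrative sketch only: generic PRG-based derandomization via seed
    # enumeration and a majority vote. `prg`, `seed_len`, and
    # `run_with_randomness` are assumed/hypothetical inputs (see the text above).
    def derandomize(x, T, prg, seed_len, run_with_randomness):
        m = T(len(x))             # the machine reads at most T(|x|) random bits
        ell = seed_len(m)         # seed length of the assumed PRG
        votes = 0
        for s in range(2 ** ell):                # enumerate every seed
            r = prg(s, m)                        # expand it to m pseudorandom bits
            votes += run_with_randomness(x, r)   # 0/1 decision on random tape r
        # If the PRG fools the machine on x, the majority over seeds equals the
        # majority over truly random tapes, i.e., the correct answer.
        return 2 * votes > 2 ** ell

The exhaustive loop makes the cost explicit: the deterministic running time is roughly $2^{\ell}\cdot\big(T(n)+\text{time to compute the PRG}\big)$ for seed length $\ell$, so a PRG with seed length close to $\log T(n)$ and near-linear running time is exactly what yields derandomization with almost no overhead.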

For sub-exponential time functions $T(n)=2^{n^{o(1)}}$ we further improve the derandomization time to $n^{1+\epsilon}\cdot T(n)$, under a mildly stronger hypothesis. We also show that *the multiplicative time overhead of $n$ is essentially optimal*, conditioned on a counting version of the non-deterministic strong exponential-time hypothesis (i.e., on $\#\mathtt{NSETH}$). Nevertheless, we show that *in the average-case setting a faster derandomization is possible*: Under hypotheses similar to the ones in our main result, for every $L\in\mathcal{BPTIME}[n^k]$ there exists a deterministic algorithm $A_L$ running in time $n^{\epsilon}\cdot n^{k}$ such that for every distribution $\mathcal{D}$ over $\{0,1\}^n$ samplable in time $n^k$ it holds that $\Pr_{x\sim\mathcal{D}}[A_L(x)=L(x)]\ge 1-n^{-\omega(1)}$.

Lastly, we present an alternative proof for the result of Doron et al. using a *proof paradigm that is both considerably simpler and more general*; in fact, we show how to simplify the analysis of any construction that ``extracts randomness from a pseudoentropic string''. We use this simpler proof to extend their result, deducing a mildly slower derandomization (i.e., with cubic or quadratic overhead) from weaker hardness assumptions (i.e., for SVN circuits that do not use randomness).



ISSN 1433-8092