ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)


TR21-080 | 10th June 2021 10:18

Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise

Authors: Lijie Chen, Roei Tell
Publication: 10th June 2021 10:46
Downloads: 2893


Abstract:

We propose a new approach to the hardness-to-randomness framework and to the promise-BPP=promise-P conjecture. Classical results rely on non-uniform hardness assumptions to construct derandomization algorithms that work in the worst case, or rely on uniform hardness assumptions to construct derandomization algorithms that work only in the average case. In both types of results, the derandomization algorithm is ``black-box'' and uses the standard PRG approach. In this work we present results that closely relate **new and natural uniform hardness assumptions** to **worst-case derandomization** of promise-BPP, where the algorithms underlying the latter derandomization are **non-black-box**.
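
For contrast with the non-black-box results below, the classical black-box approach can be summarized as follows (standard textbook notation, not taken from this paper): from a sufficiently hard function one constructs a pseudorandom generator $G\colon\{0,1\}^{O(\log n)}\to\{0,1\}^{m}$ that fools every small circuit $C$, i.e.,
$$\Big|\Pr_{s\in\{0,1\}^{O(\log n)}}\big[C(G(s))=1\big]-\Pr_{r\in\{0,1\}^{m}}\big[C(r)=1\big]\Big|\le\frac{1}{10},$$
and one derandomizes a probabilistic algorithm by enumerating all $\mathrm{poly}(n)$ seeds and taking a majority vote. Crucially, $G$ is fixed in advance and does not depend on the input being decided.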

In our main result, we show that promise-BPP=promise-P if the following holds: There exists a multi-output function $f$ computable by logspace-uniform circuits of polynomial size and depth $n^2$ that cannot be computed by uniform probabilistic algorithms in time $n^c$, for some universal constant $c>1$, on **almost all inputs**. The required failure on ``almost all inputs'' is stronger than the standard requirement of failing on one input of each length; however, the same assumption without the depth restriction on $f$ is **necessary** for the conclusion. This suggests a potential equivalence between worst-case derandomization of promise-BPP in any form (i.e., not necessarily by a black-box algorithm) and the existence of efficiently computable functions that are hard for probabilistic algorithms on almost all inputs.
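
Schematically, the main result asserts an implication of roughly the following shape (our shorthand for the statement above, with $c>1$ the universal constant; the precise uniformity and error conditions are as described there):
$$\Big(\exists\, f\ \text{computable by logspace-uniform circuits of size }\mathrm{poly}(n)\text{ and depth }n^{2}\ \text{that is hard for probabilistic time }n^{c}\text{ on almost all inputs}\Big)\ \Longrightarrow\ \text{promise-BPP}=\text{promise-P}.$$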

In our second result, we introduce a new and uniform hardness-to-randomness tradeoff for the setting of **superfast average-case derandomization**; prior to this work, such derandomization was known only under non-uniform hardness assumptions. In an extreme instantiation of our new tradeoff, we show that, under appealing uniform hardness assumptions, for every polynomial $T(n)$ and constant $\epsilon>0$ it holds that $\mathcal{BPTIME}[T]\subseteq\mathsf{heur}\text{-}\mathcal{DTIME}[T\cdot n^{\epsilon}]$, where the ``heur'' prefix means that no polynomial-time algorithm can find, with non-negligible probability, an input on which the deterministic simulation errs.
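
One natural way to read the ``heur'' notion (our phrasing; the paper's formal definition may differ in its details): $L\in\mathsf{heur}\text{-}\mathcal{DTIME}[t]$ if there is a deterministic time-$t$ algorithm $D$ such that for every probabilistic polynomial-time ``refuter'' $R$ and all sufficiently large $n$,
$$\Pr_{x\leftarrow R(1^{n})}\big[D(x)\neq L(x)\big]\le n^{-\omega(1)},$$
i.e., no efficient algorithm can find, except with negligible probability, an input on which the deterministic simulation errs.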

Technically, our approach is to design **targeted PRGs and HSGs**, as introduced by Goldreich (LNCS, 2011). The targeted PRGs/HSGs ``produce randomness from the input'', as suggested by Goldreich and Wigderson (RANDOM 2002); and their analysis relies on non-black-box versions of the reconstruction procedure of Impagliazzo and Wigderson (FOCS 1998). Our main reconstruction procedure crucially relies on the ideas underlying the proof system of Goldwasser, Kalai, and Rothblum (J. ACM 2015).
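
As a rough illustration of the targeted-generator interface (our notation; the thresholds and parameters are illustrative): a targeted PRG is a deterministic map $G\colon\{0,1\}^{n}\times\{0,1\}^{O(\log n)}\to\{0,1\}^{m}$ such that for every input $x$ and every efficient distinguisher $C$ that also receives $x$,
$$\Big|\Pr_{s}\big[C\big(x,G(x,s)\big)=1\big]-\Pr_{r\in\{0,1\}^{m}}\big[C(x,r)=1\big]\Big|\le\frac{1}{10},$$
while a targeted HSG only needs to output, among $\{G(x,s)\}_{s}$, some string accepted by $C(x,\cdot)$ whenever $C(x,\cdot)$ accepts most of its inputs. Derandomization then enumerates the $\mathrm{poly}(n)$ seeds on the given input $x$, exactly as in the black-box case, except that the generator may use $x$ itself; this is what makes the resulting derandomization algorithm non-black-box.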



ISSN 1433-8092