ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR21-153 | 20th November 2023 12:31

On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

Revision #1
Authors: Ronen Shaltiel, Emanuele Viola
Accepted on: 20th November 2023 12:31
Downloads: 31


Abstract:

The hardness vs.~randomness paradigm aims to explicitly construct pseudorandom generators $G:\{0,1\}^r \to \{0,1\}^m$ that fool circuits of size $m$, assuming the existence of explicit hard functions. A ``high-end PRG'' with seed length $r=O(\log m)$ (implying BPP=P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming \textsc{the high-end hardness assumption}: there exist constants $0<\beta < 1< B$, and functions computable in time $2^{B \cdot n}$ that cannot be computed by circuits of size $2^{\beta \cdot n}$.
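
For concreteness, the fooling requirement can be written as follows (a standard formalization; the abstract does not fix the fooling error $\varepsilon$, which is typically taken to be $1/m$ or a small constant):
\[
\text{for every circuit } C \text{ of size } m:\qquad \Big|\Pr_{s \sim \{0,1\}^r}\big[C(G(s))=1\big] \;-\; \Pr_{x \sim \{0,1\}^m}\big[C(x)=1\big]\Big| \;\le\; \varepsilon .
\]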

Recently, motivated by fast derandomization of randomized algorithms, Doron et al.~(FOCS 2020) and Chen and Tell (STOC 2021) constructed ``extreme high-end PRGs'' with seed length $r=(1+o(1))\cdot \log m$, under qualitatively stronger assumptions.

We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which $\beta=1-o(1)$ and $B=1+o(1)$, which we call \textsc{the extreme high-end hardness assumption}. We give a partial negative answer:

\begin{itemize}
\item The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from \textsc{the extreme high-end hardness assumption} must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor); a schematic of this composition appears after the list.

To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing only a few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and may explain why PRG constructions are difficult to design.

\item The construction of Chen and Tell composes two PRGs: $G_1:\{0,1\}^{(1+o(1)) \cdot \log m} \to \{0,1\}^{r_2}$, where $r_2=m^{\Omega(1)}$, and $G_2:\{0,1\}^{r_2} \to \{0,1\}^m$. The first PRG is constructed from \textsc{the extreme high-end hardness assumption}; the second needs to run in time $m^{1+o(1)}$ and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to $\frac{1}{2}+1/m$, reductions must make $\Omega(m)$ queries, even in the extreme high-end setting. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG $G_2$ from \textsc{the extreme high-end hardness assumption}; see the sketches following this list.

The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
\end{itemize}
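
As a schematic of the Doron et al. route from the first item (the seed split and the names $\mathrm{PEG}$ and $\mathrm{Ext}$ are illustrative, not the paper's notation):
\[
G(s_1,s_2) \;=\; \mathrm{Ext}\big(\mathrm{PEG}(s_1),\,s_2\big),
\]
where the output of $\mathrm{PEG}$ is only guaranteed to have pseudo-entropy, and the extractor $\mathrm{Ext}$ converts it into pseudorandom bits. For $G$ to be an extreme high-end PRG, the total seed length $|s_1|+|s_2|$ must be $(1+o(1))\cdot\log m$, and the lower bound shows that black-box constructions already force $|s_1|$ to be too large.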
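
Similarly, the Chen and Tell composition from the second item is, schematically,
\[
G(s) \;=\; G_2\big(G_1(s)\big), \qquad |s|=(1+o(1))\cdot\log m,\quad |G_1(s)|=r_2=m^{\Omega(1)},\quad |G(s)|=m,
\]
so the composed PRG inherits the short seed of $G_1$, while its running time is dominated by the requirement that $G_2$ run in time $m^{1+o(1)}$.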
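
Finally, the hardness amplification statement targeted by the lower bound can be phrased as follows (a standard black-box formulation, given here as a gloss rather than the paper's exact definition): the amplified function $\bar{f}$ and reduction $R$ must satisfy
\[
\text{for every } D:\qquad \Pr_{x}\big[D(x)=\bar{f}(x)\big] \;\ge\; \tfrac{1}{2}+\tfrac{1}{m} \;\;\Longrightarrow\;\; R^{D} \text{ computes } f,
\]
and the result shows that any such reduction $R$ must make $\Omega(m)$ oracle queries to $D$, even in the extreme high-end setting of parameters.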


Paper:

TR21-153 | 9th November 2021 13:07

On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization





TR21-153
Authors: Ronen Shaltiel, Emanuele Viola
Publication: 9th November 2021 13:08
Downloads: 308





ISSN 1433-8092