ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)


TR24-139 | 11th September 2024 14:46

Distinguishing, Predicting, and Certifying: On the Long Reach of Partial Notions of Pseudorandomness

Authors: Jiatu Li, Edward Pyne, Roei Tell
Publication: 11th September 2024 20:57
Downloads: 217


Abstract:

This paper revisits the study of two classical technical tools in theoretical computer science: Yao's transformation of distinguishers to next-bit predictors (FOCS 1982), and the ``reconstruction paradigm'' in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings.
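For context, the first tool can be sketched as follows (a standard presentation of Yao's hybrid argument, not taken from this paper): given a distinguisher $D$ that separates $X=(X_1,\ldots,X_n)$ from the uniform distribution $U_n$ with advantage $\varepsilon$, one defines the hybrid distributions

```latex
\[
  H_i = (X_1,\ldots,X_i,\,U_{i+1},\ldots,U_n),
  \qquad H_0 = U_n,\quad H_n = X .
\]
% By a telescoping (averaging) argument, some index i satisfies
\[
  \Pr[D(H_i)=1] - \Pr[D(H_{i-1})=1] \;\ge\; \varepsilon/n .
\]
% The predictor, given the prefix X_1,...,X_{i-1}, samples a bit b and a
% uniform suffix, runs D(X_1,...,X_{i-1}, b, U_{i+1},...,U_n), and outputs
% b if D accepts (and 1-b otherwise); it predicts X_i with probability
% at least 1/2 + eps/n.
```

Note that the resulting predictor is inherently randomized, since it samples both the guessed bit and the uniform suffix; derandomizing the transformation means eliminating this randomness.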

Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:

\begin{itemize}
\item We show that derandomizing these tools is equivalent to general derandomization. Specifically, we show that derandomizing distinguish-to-predict transformations is equivalent to prBPP$=$prP, and that the existence of derandomized reconstruction procedures (in a more general sense that we introduce) is equivalent to prBPP$=$prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.

\item Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce that NL$=$UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).
\end{itemize}

In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten FOCS 2021, CCC 2022): The class of problems reducible to a problem called ``Lossy Code''. We provide a structural characterization for this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations.

Lastly, we present alternative proofs for classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the notion of deterministically transforming distinguishers to predictors as the main technical tool.



ISSN 1433-8092