ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)


Paper:

TR26-045 | 30th March 2026 04:11

Using Hardness vs Randomness to Design Low-Space Algorithms

Authors: Edward Pyne, Roei Tell
Publication: 5th April 2026 11:49


Abstract:

Can we use ``hardness vs randomness'' techniques to design low-space algorithms? This text surveys a sequence of recent works showing ways to do so.
These works designed unconditional algorithms for certified derandomization and for catalytic computation; derandomization and isolation algorithms from remarkably mild assumptions; and ``win-win'' pairs of algorithms that, on every input, solve either derandomization or another important problem (e.g., $s$-$t$ connectivity) on that input.

Underlying these constructions are new, specialized ``hardness vs randomness'' tools for the setting of low-space algorithms.
We describe these technical tools, most notably constructions of pseudorandom generators whose reconstruction algorithm (i.e., the security reduction) is a deterministic low-space algorithm. We also explain a key step toward obtaining deterministic reconstruction: deterministic transformations of distinguishers into bit-predictors.
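For context, the classical transformation of a distinguisher into a bit-predictor (Yao's hybrid argument) is inherently randomized: it chooses a random hybrid index and random completion bits. The following sketch of that standard argument (not taken from the survey itself) makes explicit which random choices a deterministic low-space reconstruction would need to eliminate.

```latex
% Classical (randomized) distinguisher-to-predictor transformation,
% via Yao's hybrid argument.
Let $G\colon\{0,1\}^s\to\{0,1\}^m$ be a generator and suppose the
distinguisher $D$ satisfies
\[
  \bigl|\Pr[D(G(U_s))=1]-\Pr[D(U_m)=1]\bigr|\ \ge\ \varepsilon .
\]
For $i\in\{0,\dots,m\}$, define the hybrid distribution
$H_i = G(U_s)_{1,\dots,i}\circ U_{m-i}$, so that $H_0=U_m$ and
$H_m=G(U_s)$. The advantages telescope, so averaging yields an index
$i$ with
\[
  \Pr[D(H_i)=1]-\Pr[D(H_{i-1})=1]\ \ge\ \varepsilon/m .
\]
The predictor $P$, given the prefix $x_1\cdots x_{i-1}$ of $G$'s
output, draws a uniformly random guess $b$ and random suffix
$z_{i+1}\cdots z_m$, runs $D(x_1\cdots x_{i-1}\,b\,z_{i+1}\cdots z_m)$,
and outputs $b$ if $D$ accepts (and $1-b$ otherwise). A standard
calculation shows $P$ predicts $x_i$ correctly with probability at
least $1/2+\varepsilon/m$.
```

The random objects here are the index $i$, the guess $b$, and the suffix $z_{i+1}\cdots z_m$; the deterministic transformations discussed in the survey must produce a predictor without sampling them.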

We pose a host of open questions that explore new ways of using hardness vs randomness to design low-space algorithms. These questions address problems in derandomization, catalytic computation, explicit constructions, learning algorithms, and more.
This survey appeared in Issue 148 of the Bulletin of the European Association for Theoretical Computer Science.



ISSN 1433-8092