ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR22-121 | 30th September 2022 19:58

Recent Progress on Derandomizing Space-Bounded Computation





Revision #1
Authors: William Hoza
Accepted on: 30th September 2022 19:58
Downloads: 931


Abstract:

Is randomness ever necessary for space-efficient computation? It is commonly conjectured that L = BPL, meaning that halting decision algorithms can always be derandomized without increasing their space complexity by more than a constant factor. In the past few years (say, from 2017 to 2022), there has been some exciting progress toward proving this conjecture. Thanks to recent work, we have new pseudorandom generators (PRGs), new black-box derandomization algorithms (generalizations of PRGs), and new non-black-box derandomization algorithms. This article is a survey of these recent developments. We organize the underlying techniques into four overlapping themes:

1. The *iterated pseudorandom restrictions* framework for designing PRGs, especially PRGs for functions computable by arbitrary-order read-once branching programs.

2. The *inverse Laplacian* perspective on derandomizing BPL and the related concept of local consistency.

3. *Error reduction* procedures, including methods of designing low-error weighted pseudorandom generators (WPRGs).

4. The continued use of *spectral expander graphs* in this domain via the derandomized square operation and the Impagliazzo-Nisan-Wigderson PRG (STOC 1994).

We give an overview of these ideas and their applications, and we discuss the challenges ahead.
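To make the central object concrete: a (layered) read-once branching program reads each input bit once, updating a state of bounded width, and a PRG for such programs is a short-seed generator whose outputs the program accepts with nearly the same probability as truly uniform bits. The sketch below is illustrative only and not from the survey; the toy parity program, the state encoding, and the helper names (`eval_robp`, `acceptance_prob`) are assumptions chosen for clarity.

```python
import itertools

def eval_robp(layers, start, accept, x):
    """Evaluate a read-once branching program on input bits x.
    layers[i] is a dict mapping (state, bit) -> next state; the
    i-th layer reads the i-th input bit exactly once."""
    state = start
    for layer, bit in zip(layers, x):
        state = layer[(state, bit)]
    return int(state in accept)

def acceptance_prob(layers, start, accept, inputs):
    """Fraction of the given inputs the program accepts."""
    inputs = list(inputs)
    return sum(eval_robp(layers, start, accept, x) for x in inputs) / len(inputs)

# Toy width-2 program computing the parity of 3 bits: state = running XOR.
parity_layer = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
layers = [parity_layer] * 3

# Under the uniform distribution, exactly half the inputs have odd parity.
uniform = list(itertools.product([0, 1], repeat=3))
print(acceptance_prob(layers, 0, {1}, uniform))  # 0.5
```

A PRG with error eps for this class would replace `uniform` with the multiset of outputs over all seeds, and the guarantee is that `acceptance_prob` changes by at most eps for *every* program of the given width and length, regardless of the order in which the bits are read.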



Changes to previous version:

Minor improvements to the presentation


Paper:

TR22-121 | 27th August 2022 00:12

Recent Progress on Derandomizing Space-Bounded Computation





TR22-121
Authors: William Hoza
Publication: 28th August 2022 05:52
Downloads: 793
Abstract: (identical to Revision #1 above)



ISSN 1433-8092