### Paper:

TR22-113 | 11th August 2022 23:53

#### Leakage-Resilient Hardness vs. Randomness

Authors: Yanyi Liu, Rafael Pass
Publication: 14th August 2022 11:05

Abstract:

A central open problem in complexity theory concerns the question of
whether all efficient randomized algorithms can be simulated by
efficient deterministic algorithms. The celebrated “hardness
vs. randomness” paradigm pioneered by Blum-Micali (SIAM JoC’84),
Yao (FOCS’84), and Nisan-Wigderson (JCSS’94) presents hardness
assumptions under which $\prBPP = \prP$, but these hardness assumptions
are not known to also be necessary for such derandomization.

In this work, following the recent work by Chen and Tell (FOCS’21) that
considers “almost-all-input” hardness of a function $f$ (i.e., hardness
of computing $f$ on more than a finite number of inputs), we consider
“almost-all-input” \emph{leakage-resilient hardness} of a function $f$
—that is, hardness of computing $f(x)$ even given, say, $\sqrt{|x|}$
bits of leakage of $f(x)$. We show that leakage-resilient hardness
characterizes derandomization of $\prBPP$ (i.e., it yields a condition
that is both \emph{necessary} and \emph{sufficient} for
derandomization). In more
detail, we show that there exists a constant $c$ such that the following
are equivalent:

- $\prBPP = \prP$;
- Existence of a poly-time computable function $f : \{0,1\}^n \rightarrow \{0,1\}^n$
that is almost-all-input leakage-resilient hard with respect to
$n^c$-time probabilistic algorithms (see the sketch after this list).
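
To make the second condition concrete, the following is a minimal
sketch of the hardness notion. The leakage bound $\sqrt{|x|}$ is taken
from the abstract above; the $1/3$ success threshold and the modeling
of leakage as a second $n^c$-time algorithm are illustrative
assumptions, not necessarily the paper's exact formalization.

```latex
% Sketch only: the 1/3 threshold and the two-algorithm modeling of
% leakage are illustrative assumptions, not quoted from the paper.
\begin{definition}[almost-all-input leakage-resilient hardness, sketch]
A poly-time computable $f : \{0,1\}^n \rightarrow \{0,1\}^n$ is
\emph{almost-all-input leakage-resilient hard} with respect to
$n^c$-time probabilistic algorithms if, for every pair of probabilistic
$n^c$-time algorithms $(\mathsf{leak}, A)$ satisfying
$|\mathsf{leak}(x, f(x))| \leq \sqrt{|x|}$, we have
\[
  \Pr\bigl[ A\bigl(x, \mathsf{leak}(x, f(x))\bigr) = f(x) \bigr]
  < \tfrac{1}{3}
\]
for all but finitely many inputs $x$.
\end{definition}
```

Intuitively, no efficient adversary can recover $f(x)$ on more than
finitely many inputs, even when handed a short, efficiently computed
hint about $f(x)$.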

Our characterization extends naturally to the low-end derandomization
regime, to derandomization of $\prMA$, and to average-case
derandomization, by appropriately weakening the requirements on the
function $f$.
