ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #2 to TR22-113 | 16th May 2023 00:40

Leakage-Resilient Hardness vs. Randomness





Revision #2
Authors: Yanyi Liu, Rafael Pass
Accepted on: 16th May 2023 00:40
Downloads: 404


Abstract:

A central open problem in complexity theory concerns the question of
whether all efficient randomized algorithms can be simulated by
efficient deterministic algorithms. The celebrated ``hardness
vs. randomness'' paradigm pioneered by Blum-Micali (SIAM JoC'84),
Yao (FOCS'84) and Nisan-Wigderson (JCSS'94) presents hardness
assumptions under which, e.g., $\prBPP = \prP$ (so-called ``high-end''
derandomization) or $\prBPP \subseteq \prSUBEXP$ (so-called
``low-end'' derandomization), and more generally, under which
$\prBPP \subseteq \prDTIME(\C)$ where $\C$ is a ``nice'' class
(closed under composition with a polynomial). However, these hardness
assumptions are not known to also be necessary for such derandomization.

In this work, following the recent work by Chen and Tell (FOCS'21)
that considers ``almost-all-input'' hardness of a function $f$ (i.e.,
no efficient algorithm can compute $f$ correctly on more than finitely
many inputs), we consider ``almost-all-input''
\emph{leakage-resilient hardness} of a function $f$; that is, hardness
of computing $f(x)$ even given, say, $\sqrt{|x|}$ bits of leakage of
$f(x)$. We show that leakage-resilient hardness characterizes
derandomization of $\prBPP$ (i.e., gives both a \emph{necessary} and a
\emph{sufficient} condition for derandomization), both in the high-end
and in the low-end setting.
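
For concreteness, one plausible formalization of this hardness notion, reconstructed from the informal description above (the paper's exact definition may differ in details, and the failure threshold $2/3$ is our illustrative choice), is the following: a function $f$ is almost-all-input leakage-resilient hard w.r.t. $n^c$-time probabilistic algorithms if for every pair $(\mathsf{leak}, A)$ of probabilistic algorithms running in time $|x|^c$, where $\mathsf{leak}(x, f(x))$ outputs at most $\sqrt{|x|}$ bits,
\[
\exists N \;\, \forall x \in \{0,1\}^{*}, |x| \geq N: \quad
\Pr\big[A\big(x, \mathsf{leak}(x, f(x))\big) = f(x)\big] < \tfrac{2}{3}.
\]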

In more detail, we show that there exists a constant $c$ such that for
every function $T$, the following are equivalent:
- $\prBPP \subseteq \prDTIME(\poly(T(\poly(n))))$;
- Existence of a $\poly(T(\poly(n)))$-time computable function $f : \{0,1\}^n \rightarrow \{0,1\}^n$ that is almost-all-input leakage-resilient hard with respect to $n^c$-time probabilistic algorithms.
As far as we know, this is the first assumption that
characterizes derandomization in both the low-end and the high-end regime.
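
To illustrate the two regimes, one can instantiate the time bound $T$ in the equivalence above; the parameter choices below are illustrative readings of the statement:
\[
T(n) = \poly(n) \;\Longrightarrow\; \prBPP \subseteq \prDTIME(\poly(n)) = \prP \quad \text{(high-end)},
\]
\[
T(n) = 2^{n^{\varepsilon}} \text{ for every } \varepsilon > 0 \;\Longrightarrow\; \prBPP \subseteq \prSUBEXP \quad \text{(low-end)}.
\]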

Additionally, our characterization naturally extends to
derandomization of $\prMA$, and also to average-case derandomization,
by appropriately weakening the requirements on the function $f$. In
particular, for the case of average-case (a.k.a. ``effective'')
derandomization, we no longer require the function to be
almost-all-input hard; rather, it only needs to satisfy the more
standard notion of average-case leakage-resilient hardness (w.r.t.
every samplable distribution), whereas for derandomization of $\prMA$,
we instead consider leakage-resilience for relations.
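
Based on the wording above, one plausible reading of average-case leakage-resilient hardness is the following (the paper's precise quantifiers may differ; the negligible bound is our illustrative choice): for every samplable distribution ensemble $\{D_n\}$ and every pair $(\mathsf{leak}, A)$ of $n^c$-time probabilistic algorithms with $|\mathsf{leak}(x, f(x))| \leq \sqrt{|x|}$,
\[
\Pr_{x \leftarrow D_n}\big[A\big(x, \mathsf{leak}(x, f(x))\big) = f(x)\big] \leq \mathsf{negl}(n).
\]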


Revision #1 to TR22-113 | 10th February 2023 22:54

Leakage-Resilient Hardness vs. Randomness





Revision #1
Authors: Yanyi Liu, Rafael Pass
Accepted on: 10th February 2023 22:54
Downloads: 277


Abstract:

(The abstract of this revision is identical to that of Revision #2 above.)


Paper:

TR22-113 | 11th August 2022 23:53

Leakage-Resilient Hardness vs. Randomness





TR22-113
Authors: Yanyi Liu, Rafael Pass
Publication: 14th August 2022 11:05
Downloads: 789


Abstract:

A central open problem in complexity theory concerns the question of
whether all efficient randomized algorithms can be simulated by
efficient deterministic algorithms. The celebrated ``hardness
vs. randomness'' paradigm pioneered by Blum-Micali (SIAM JoC'84),
Yao (FOCS'84) and Nisan-Wigderson (JCSS'94) presents hardness
assumptions under which $\prBPP = \prP$, but these hardness assumptions
are not known to also be necessary for such derandomization.

In this work, following the recent work by Chen and Tell (FOCS'21) that
considers ``almost-all-input'' hardness of a function $f$ (i.e., no
efficient algorithm can compute $f$ correctly on more than finitely
many inputs), we consider ``almost-all-input''
\emph{leakage-resilient hardness} of a function $f$; that is, hardness
of computing $f(x)$ even given, say, $\sqrt{|x|}$ bits of leakage of
$f(x)$. We show that leakage-resilient hardness characterizes
derandomization of $\prBPP$ (i.e., gives both a \emph{necessary} and a
\emph{sufficient} condition for derandomization). In more detail, we
show that there exists a constant $c$ such that the following are
equivalent:

- $\prBPP = \prP$;
- Existence of a poly-time computable function $f : \{0,1\}^n \rightarrow \{0,1\}^n$ that is almost-all-input leakage-resilient hard with respect to $n^c$-time probabilistic algorithms.

Our characterization naturally extends to the low-end
derandomization regime, to derandomization of $\prMA$, and to
average-case derandomization, by appropriately weakening the
requirements on the function $f$.



ISSN 1433-8092