The hardness vs. randomness paradigm converts a function $f \colon \{0,1\}^n \rightarrow \{0,1\}$ that is hard for circuits of size $s$ into a pseudorandom generator (PRG) $G \colon \{0,1\}^d \to \{0,1\}^{s'}$ that fools circuits of size $s' = s'(s)$. In applications to derandomization, such as proofs of $\mathbf{BPP} = \mathbf{P}$, the overhead of using such a PRG depends directly on two quantities: the runtime of $G$ and its seed length $d$. Ideally, the runtime of a PRG fooling circuits of size $s'$ should be only slightly larger than $s'$, and $d$ should be $\log s'$. A central challenge in optimizing these parameters is the relationship between $s$ and $s'$: ideally, $s'$ should be only slightly smaller than $s$.
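For concreteness, one standard way to formalize ``fools'' (the error parameter $\epsilon$ is left implicit in the abstract) is to require that for every circuit $C$ of size $s'$ on $s'$ input bits,
\[
  \Bigl|\, \Pr_{x \sim \{0,1\}^{s'}}[C(x) = 1] \;-\; \Pr_{y \sim \{0,1\}^{d}}[C(G(y)) = 1] \,\Bigr| \;\le\; \epsilon .
\]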
Impagliazzo and Wigderson [IW97] constructed PRGs where $s$ is a large polynomial in $s'$ and $d=c\cdot \log s$ for a large constant $c\ge 8$. This gives derandomization with a large polynomial slowdown.
Doron, Moshkovitz, Oh and Zuckerman [DMOZ22] constructed near optimal PRGs where $s'=s^{1-\alpha}$ and $d=(1+\alpha)\log s$ for arbitrarily small $\alpha>0$. However, they require that $f$ is hard against nondeterministic randomized circuits rather than deterministic circuits.
Subsequent works of Chen and Tell [CT21, CT24], later joined by Ball [BCT25], provide fast derandomization under various new assumptions. However, no result recovers the nearly optimal PRG of [DMOZ22] under the standard deterministic hardness assumption of [IW97]. In fact, from such an assumption, even constructing a nearly optimal hitting set or pseudoentropy generator against algorithms that err rarely is open. A key bottleneck in obtaining such a result is the use of the hybrid argument, which is known to require a large polynomial loss from $s$ to $s'$.
We identify a simple property of randomized algorithms, which we call ``insensitivity'', that allows us to bypass the hybrid argument. Many randomized algorithms satisfy it: flipping a uniformly random bit in a randomness string on which the algorithm is correct is likely to yield yet another randomness string on which the algorithm is correct.
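One natural way to make this quantitative (the exact thresholds are not specified in the abstract and the notation here is illustrative) is the following: for a randomized algorithm $A$ using $m$ random bits on input $x$, write $\mathrm{Good}_x = \{ r \in \{0,1\}^m : A(x,r) \text{ is correct}\}$; insensitivity asks that
\[
  \Pr_{r \sim \mathrm{Good}_x,\; i \sim [m]}\bigl[\, r \oplus e_i \in \mathrm{Good}_x \,\bigr] \;\ge\; 1 - \delta ,
\]
where $e_i$ is the string that is $1$ only in position $i$ and $\delta$ is a small sensitivity parameter.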
We construct hitting sets and pseudoentropy generators for insensitive algorithms with better parameters than the hybrid argument can offer. Additionally, we consider a two-sided variant of insensitivity.
In this variant, flipping a uniformly random bit in a randomness string for which the algorithm is incorrect either always yields another incorrect string, or yields a correct string with decent probability. For such algorithms we construct hitting sets and pseudoentropy generators with near optimal parameters.
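In the same illustrative notation (again, the exact quantification and thresholds are our own and are not fixed by the abstract), writing $\mathrm{Bad}_x = \{0,1\}^m \setminus \mathrm{Good}_x$, one reading of the two-sided condition is that for every incorrect randomness string $r \in \mathrm{Bad}_x$, either
\[
  \Pr_{i \sim [m]}\bigl[\, r \oplus e_i \in \mathrm{Bad}_x \,\bigr] = 1
  \qquad\text{or}\qquad
  \Pr_{i \sim [m]}\bigl[\, r \oplus e_i \in \mathrm{Good}_x \,\bigr] \;\ge\; \gamma
\]
for some noticeable $\gamma > 0$.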