A classical challenge in complexity theory and cryptography is to simulate interactive proof systems by non-interactive proof systems. In this work we leverage approaches from recent works in derandomization to address this challenge, focusing on non-interactive simulations that are sound against uniform adversarial algorithms.
Our results concern fundamental questions in ... more >>>
We show that there is a constant $k$ such that Buss's intuitionistic theory $\mathbf{IS}^1_2$ does not prove that SAT requires co-nondeterministic circuits of size at least $n^k$. To our knowledge, this is the first unconditional unprovability result in bounded arithmetic in the context of worst-case fixed-polynomial size circuit lower bounds. ... more >>>
Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are necessary to prove a given theorem. In this work, we systematically explore the reverse mathematics of complexity lower bounds. We explore reversals in the setting of bounded arithmetic, with Cook's theory $\mathbf{PV}_1$ as the base ... more >>>
We show that there is a language in $\mathrm{S}_2\mathrm{E}/_1$ (symmetric exponential time with one bit of advice) with circuit complexity at least $2^n/n$. In particular, the above also implies the same near-maximum circuit lower bounds for the classes $\Sigma_2\mathrm{E}$, $(\Sigma_2\mathrm{E}\cap\Pi_2\mathrm{E})/_1$, and $\mathrm{ZPE}^{\mathrm{NP}}/_1$. Previously, only "half-exponential" circuit lower bounds for these ... more >>>
A weighted pseudorandom generator (WPRG) is a generalization of a pseudorandom generator (PRG) in which, roughly speaking, probabilities are replaced with weights that are permitted to be positive or negative. We present new explicit constructions of WPRGs that fool certain classes of standard-order read-once branching programs. In particular, our WPRGs ... more >>>
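For reference (this formalization is not part of the excerpt), a WPRG for a class $\mathcal{C}$ with error $\varepsilon$ is typically a pair $(G, w)$ with $G \colon \{0,1\}^s \to \{0,1\}^n$ and $w \colon \{0,1\}^s \to \mathbb{R}$ such that for every $f \in \mathcal{C}$,
\[ \left| \mathop{\mathbb{E}}_{x \sim \{0,1\}^n}[f(x)] \;-\; \mathop{\mathbb{E}}_{z \sim \{0,1\}^s}\bigl[w(z) \cdot f(G(z))\bigr] \right| \;\le\; \varepsilon, \]
which recovers the usual PRG definition when $w \equiv 1$.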
We establish an equivalence between two algorithmic tasks: *derandomization*, the deterministic simulation of probabilistic algorithms; and *refutation*, the deterministic construction of inputs on which a given probabilistic algorithm fails to compute a certain hard function.
We prove that refuting low-space probabilistic streaming algorithms that try to compute functions $f\in\mathcal{FP}$ ... more >>>
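To make the refutation task concrete (a gloss in notation not taken from the abstract): for a hard function $f$ and a probabilistic algorithm $A$ that purports to compute it, a refuter is a deterministic procedure that, on input $1^n$, outputs a string $x \in \{0,1\}^n$ such that
\[ \Pr_{r}\bigl[A(x; r) = f(x)\bigr] < 2/3, \]
i.e., an explicit input on which $A$ fails.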
What's new in the world of derandomization? Questions about pseudorandomness and derandomization have been driving progress in complexity theory for many decades. In this survey we will describe new approaches to the $\mathcal{BPP}=\mathcal{P}$ conjecture from recent years, as well as new questions, algorithmic approaches, and ways of thinking. For example: ... more >>>
A randomized algorithm for a search problem is *pseudodeterministic* if it produces a fixed canonical solution to the search problem with high probability. In their seminal work on the topic, Gat and Goldwasser posed as their main open problem whether prime numbers can be pseudodeterministically constructed in polynomial time. ... more >>>
In this paper, we obtain several new results on lower bounds and derandomization for ACC^0 circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity):
1. We prove that any polynomial-time Merlin-Arthur proof system with an ACC^0 verifier (denoted by ... more >>>
We consider the Max-Cut problem, asking how much space is needed by a streaming algorithm in order to estimate the value of the maximum cut in a graph. This problem has been extensively studied over the last decade, and we now have a near-optimal lower bound for one-pass streaming algorithms, ... more >>>
The leading technical approach in uniform hardness-to-randomness in the last two decades faced several well-known barriers that caused results to rely on overly strong hardness assumptions, and yet still yield suboptimal conclusions.
In this work we show uniform hardness-to-randomness results that *simultaneously break through all of the known barriers*. Specifically, ... more >>>
In a recent work, Fan, Li, and Yang (STOC 2022) constructed a family of almost-universal hash functions such that each function in the family is computable by $(2n + o(n))$-gate circuits of fan-in $2$ over the $B_2$ basis. Applying this family, they established the existence of pseudorandom functions computable by ... more >>>
What is the actual cost of derandomization? And can we get it for free? These questions were recently raised by Doron et al. (FOCS 2020) and have been attracting considerable interest. In this work we extend the study of these questions to the setting of *derandomizing interactive proof systems*. ... more >>>
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness ... more >>>
In a Merlin-Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability $1$, and rejects invalid proofs with probability arbitrarily close to $1$. The running time of such a system is defined to be the length of Merlin's proof plus the running time of Arthur. We ... more >>>
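Spelled out (a standard formalization, not quoted verbatim from the abstract): a language $L$ admits such a proof system if there is a probabilistic verifier $V$ satisfying
\[ x \in L \;\Longrightarrow\; \exists\, y\ \Pr_{r}\bigl[V(x, y, r) = 1\bigr] = 1, \qquad x \notin L \;\Longrightarrow\; \forall\, y\ \Pr_{r}\bigl[V(x, y, r) = 1\bigr] \le 1/3, \]
where the soundness error $1/3$ can be driven arbitrarily close to $0$ by repetition.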
For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences ... more >>>
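In symbols (my paraphrase, not text from the excerpt): a constructive separation of $L \notin C$ provides, for every $C$-algorithm $A$, an efficient refuter $R_A$ that on input $1^n$ (for infinitely many, or all sufficiently large, $n$, depending on the flavor of the separation) outputs an $x \in \{0,1\}^n$ with
\[ A(x) \neq L(x). \]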
We propose a new approach to the hardness-to-randomness framework and to the promise-BPP=promise-P conjecture. Classical results rely on non-uniform hardness assumptions to construct derandomization algorithms that work in the worst case, or rely on uniform hardness assumptions to construct derandomization algorithms that work only in the average case. In both types of ... more >>>
We develop a general framework that characterizes strong average-case lower bounds against circuit classes $\mathcal{C}$ contained in $\mathrm{NC}^1$, such as $\mathrm{AC}^0[\oplus]$ and $\mathrm{ACC}^0$. We apply this framework to show:
- Generic seed reduction: Pseudorandom generators (PRGs) against $\mathcal{C}$ of seed length $\leq n -1$ and error $\varepsilon(n) = n^{-\omega(1)}$ can ... more >>>
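For context (definition not included in the excerpt): a PRG against $\mathcal{C}$ with seed length $s(n)$ and error $\varepsilon(n)$ is a function $G \colon \{0,1\}^{s(n)} \to \{0,1\}^n$ such that every $C \in \mathcal{C}$ satisfies
\[ \Bigl| \Pr_{z \sim \{0,1\}^{s(n)}}\bigl[C(G(z)) = 1\bigr] - \Pr_{x \sim \{0,1\}^{n}}\bigl[C(x) = 1\bigr] \Bigr| \le \varepsilon(n); \]
seed length $n - 1$ is the weakest nontrivial setting, saving just one bit of randomness.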
We give an almost quadratic $n^{2-o(1)}$ lower bound on the space consumption of any $o(\sqrt{\log n})$-pass streaming algorithm solving the (directed) $s$-$t$ reachability problem. This means that any such algorithm must essentially store the entire graph. As corollaries, we obtain almost quadratic space lower bounds for additional fundamental problems, including ... more >>>
In this work we prove that there is a function $f \in \textrm{E}^\textrm{NP}$ such that, for every sufficiently large $n$ and $d = \sqrt{n}/\log n$, $f_n$ ($f$ restricted to $n$-bit inputs) cannot be $(1/2 + 2^{-d})$-approximated by $\mathbb{F}_2$-polynomials of degree $d$. We also observe that a minor improvement ... more >>>
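Unpacking the approximation notion (standard usage, not quoted from the excerpt): "$f_n$ cannot be $(1/2 + 2^{-d})$-approximated by $\mathbb{F}_2$-polynomials of degree $d$" means that every polynomial $p$ over $\mathbb{F}_2$ of degree at most $d$ satisfies
\[ \Pr_{x \sim \{0,1\}^n}\bigl[p(x) = f_n(x)\bigr] < \frac{1}{2} + 2^{-d}. \]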
In certain complexity-theoretic settings, it is notoriously difficult to prove complexity separations which hold almost everywhere, i.e., for all but finitely many input lengths. For example, a classical open question is whether $\mathrm{NEXP} \subset \mathrm{i.o.-}\mathrm{NP}$; that is, it is open whether nondeterministic exponential time computations can be simulated on infinitely ... more >>>
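As a reminder of the notation (this definition is standard and not part of the excerpt): $L \in \mathrm{i.o.\text{-}}\mathcal{C}$ if there is some $L' \in \mathcal{C}$ with
\[ L \cap \{0,1\}^n = L' \cap \{0,1\}^n \quad \text{for infinitely many } n, \]
so $\mathrm{NEXP} \subset \mathrm{i.o.-}\mathrm{NP}$ would mean that every $\mathrm{NEXP}$ language has an $\mathrm{NP}$ simulation that is correct on infinitely many input lengths.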
Extending the classical ``hardness-to-randomness'' line of work, Doron et al. (FOCS 2020) recently proved that derandomization with near-quadratic time overhead is possible, under the assumption that there exists a function in $\mathcal{DTIME}[2^n]$ that cannot be computed by randomized SVN circuits of size $2^{(1-\epsilon)\cdot n}$ for a small $\epsilon$.
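Read quantitatively (a rough paraphrase of ``near-quadratic time overhead'', not a verbatim statement from the paper), the conclusion has the form
\[ \mathcal{BPTIME}[T(n)] \subseteq \mathcal{DTIME}\bigl[T(n)^{2 + o(1)}\bigr] \]
for well-behaved time bounds $T$, under the stated hardness assumption.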
In this work we ... more >>>
We establish several ``sharp threshold'' results for computational complexity. For certain tasks, we can prove a resource lower bound of $n^c$ for $c \geq 1$ (or obtain an efficient circuit-analysis algorithm for $n^c$ size); there is strong intuition that a similar result can be proved for larger functions of $n$, ... more >>>
We prove that for all constants a, NQP = NTIME[n^{polylog(n)}] cannot be (1/2 + 2^{-log^a n})-approximated by 2^{log^a n}-size ACC^0 ∘ THR circuits (ACC^0 circuits with a bottom layer of THR gates). Previously, it was even open whether E^NP can be (1/2+1/sqrt{n})-approximated by AC^0[2] circuits. As a straightforward application, ... more >>>
The Exponential-Time Hypothesis ($ETH$) is a strengthening of the $\mathcal{P} \neq \mathcal{NP}$ conjecture, stating that $3\text{-}SAT$ on $n$ variables cannot be solved in time $2^{\epsilon\cdot n}$, for some $\epsilon>0$. In recent years, analogous hypotheses that are ``exponentially-strong'' forms of other classical complexity conjectures (such as $\mathcal{NP}\not\subseteq\mathcal{BPP}$ or $co\text{-}\mathcal{NP}\not\subseteq \mathcal{NP}$) have ... more >>>
Hardness magnification reduces major complexity separations (such as $EXP \not\subseteq NC^1$) to proving lower bounds for some natural problem $Q$ against weak circuit models. Several recent works [OS18, MMW19, CT19, OPS19, CMMW19, Oli19, CJW19a] have established results of this form. In the most intriguing cases, the required lower bound is ... more >>>
In the Minimum Circuit Size Problem (MCSP[s(m)]), we ask if there is a circuit of size s(m) computing a given truth-table of length n = 2^m. Recently, a surprising phenomenon termed hardness magnification by [Oliveira and Santhanam, FOCS 2018] was discovered for MCSP[s(m)] and the related problem MKtP of ... more >>>
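As a decision problem (standard phrasing, using the parameters of the excerpt): an instance of MCSP[s(m)] is a string $T \in \{0,1\}^n$ with $n = 2^m$, interpreted as the truth-table of a function $f_T \colon \{0,1\}^m \to \{0,1\}$, and the question is whether
\[ \exists\ \text{a circuit } C \text{ of size at most } s(m) \ \text{ such that }\ C(x) = f_T(x) \ \text{ for every } x \in \{0,1\}^m. \]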
Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems
A frontier open problem in circuit complexity is to prove P^NP is not in SIZE[n^k] for all k; this is a necessary intermediate step towards proving NP is not in P/poly. Previously, for several classes containing P^NP, including NP^NP, ZPP^NP, and ... more >>>
Consider the multiparty communication complexity model where there are n processors, each receiving as input a row of an n by n matrix M with entries in {0, 1}, and in each round each party can broadcast a single bit to all other parties (this is known as the BCAST(1) ... more >>>
Following the seminal work of [Williams, J. ACM 2014], in a recent breakthrough, [Murray and Williams, STOC 2018] proved that NQP (non-deterministic quasi-polynomial time) does not have polynomial-size ACC^0 circuits.
We strengthen the above lower bound to an average-case one, by proving that for all constants c, ... more >>>
The best-known lower bounds for the circuit class $\mathcal{TC}^0$ are only slightly super-linear. Similarly, the best-known algorithm for derandomization of this class is an algorithm for quantified derandomization (i.e., a weak type of derandomization) of circuits of slightly super-linear size. In this paper we show that even very mild quantitative ... more >>>
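For orientation (a gloss on the term, not text from the excerpt): in quantified derandomization, one is given a circuit $C \colon \{0,1\}^n \to \{0,1\}$ from the class that is promised to reject at most $B(n)$ of its $2^n$ inputs, for some bound $B(n) \ll 2^n$, and the goal is to deterministically output a satisfying input, i.e., some $x$ with
\[ C(x) = 1. \]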
In this paper we study the (Bichromatic) Maximum Inner Product Problem (Max-IP), in which we are given sets $A$ and $B$ of vectors, and the goal is to find $a \in A$ and $b \in B$ maximizing inner product $a \cdot b$. Max-IP is very basic and serves ... more >>>
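In symbols (with parameters not specified in the excerpt): given sets $A, B$ of $n$ Boolean (or real) vectors in $d$ dimensions, Max-IP asks for
\[ \mathrm{OPT}(A, B) = \max_{a \in A,\ b \in B} \ a \cdot b, \]
which exhaustive search computes in $O(n^2 d)$ time by trying all pairs.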
In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis ... more >>>
In both query and communication complexity, we give separations between the class NISZK, containing those problems with non-interactive statistical zero knowledge proof systems, and the class UPP, containing those problems with randomized algorithms with unbounded error. These results significantly improve on earlier query separations of Vereschagin [Ver95] and Aaronson [Aar12] ... more >>>