ECCC - Reports
https://eccc.weizmann.ac.il/
Latest Reports published at https://eccc.weizmann.ac.il
Mon, 21 Jun 2021 19:17:33 +0300

TR21-085 | The Final Nail in the Coffin of Statistically-Secure Obfuscator. |
Ilya Volkovich
https://eccc.weizmann.ac.il/report/2021/085

We present an elementary, self-contained proof of the result of Goldwasser and Rothblum [GR07] that the existence of a (perfect) statistically secure obfuscator implies a collapse of the polynomial hierarchy. In fact, we show that the existence of a weaker object implies a somewhat stronger statement. In addition, we extend the result of [GR07] to the case of an imperfect statistically secure obfuscator.

Mon, 21 Jun 2021 19:17:33 +0300

TR21-084 | PCPs and Instance Compression from a Cryptographic Lens |
Ron Rothblum,
Liron Bronfman
https://eccc.weizmann.ac.il/report/2021/084

Modern cryptography fundamentally relies on the assumption that the adversary trying to break the scheme is computationally bounded. This assumption lets us construct cryptographic protocols and primitives that are known to be impossible otherwise. In this work we explore the effect of bounding the adversary's power in other information-theoretic proof systems and show how to use this assumption to bypass impossibility results.
We first consider the question of constructing succinct PCPs. These are PCPs whose length is polynomial only in the length of the original NP witness (in contrast to standard PCPs whose length is proportional to the non-deterministic verification time).
Unfortunately, succinct PCPs are known to be impossible to construct under standard complexity assumptions. Assuming the sub-exponential hardness of the learning with errors (LWE) problem, we construct succinct probabilistically checkable arguments or PCAs (Zimand 2001, Kalai and Raz 2009), which are PCPs in which soundness is guaranteed against efficiently generated false proofs. Our PCA construction is for every NP relation that can be verified by a small-depth circuit (e.g., SAT, clique, TSP, etc.) and in contrast to prior work is publicly verifiable and has constant query complexity. Curiously, we also show, as a proof-of-concept, that such publicly-verifiable PCAs can be used to derive hardness of approximation results.
Second, we consider the notion of Instance Compression (Harnik and Naor, 2006). An instance compression scheme lets one compress, for example, a CNF formula $\varphi$ on $m$ variables and $n \gg m$ clauses to a new formula $\varphi'$ with only $poly(m)$ clauses, so that $\varphi$ is satisfiable if and only if $\varphi'$ is satisfiable. Instance compression has been shown to be closely related to succinct PCPs and is similarly highly unlikely to exist. We introduce a computational analog of instance compression in which we require that if $\varphi$ is unsatisfiable then $\varphi'$ is effectively unsatisfiable, in the sense that it is computationally infeasible to find a satisfying assignment for $\varphi'$ (although such an assignment may exist). Assuming the same sub-exponential LWE assumption, we construct such computational instance compression schemes for every bounded-depth NP relation. As an application, this lets one compress $k$ formulas $\phi_1,\dots,\phi_k$ into a single short formula $\phi$ that is effectively satisfiable if and only if at least one of the original formulas was satisfiable.

Mon, 21 Jun 2021 13:53:41 +0300

TR21-083 | Tight Space Complexity of the Coin Problem |
Mark Braverman,
Sumegha Garg,
Or Zamir
https://eccc.weizmann.ac.il/report/2021/083

In the coin problem we are asked to distinguish, with probability at least $2/3$, $n$ i.i.d. coins which are heads with probability $\frac{1}{2}+\beta$ from ones which are heads with probability $\frac{1}{2}-\beta$. We are interested in the space complexity of the coin problem, corresponding to the width of a read-once branching program solving the problem.
The coin problem becomes more difficult as $\beta$ becomes smaller. Statistically, it can be solved whenever $\beta = \Omega(n^{-1/2})$, using counting. It has been previously shown that for $\beta = O(n^{-1/2})$, counting is essentially optimal (equivalently, width $poly(n)$ is necessary [Braverman-Garg-Woodruff FOCS'20]). On the other hand, the coin problem requires only $O(\log n)$ width for $\beta>n^{-c}$ for any constant $c<\log_2(\sqrt{5}-1)\approx 0.306$ (following the low-width simulation of the AND-OR tree of [Valiant, Journal of Algorithms'84]).
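As a quick illustration of the statistical claim above, here is a minimal Monte Carlo sketch (in Python; not part of the paper, and it uses a $\log n$-bit counter rather than a bounded-width branching program) of the counting strategy at bias $\beta = \Theta(n^{-1/2})$:

```python
import random

def count_distinguisher(coins):
    # Guess bias +beta if more than half the coins came up heads.
    # Counting maintains a log(n)-bit counter, so it sits in the poly(n)-width
    # regime discussed above, not the O(log n)-width one.
    return +1 if sum(coins) > len(coins) / 2 else -1

def success_rate(n, beta, trials=1000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sign = rng.choice([+1, -1])          # hidden direction of the bias
        p = 0.5 + sign * beta
        coins = [1 if rng.random() < p else 0 for _ in range(n)]
        hits += (count_distinguisher(coins) == sign)
    return hits / trials

# beta = Omega(n^{-1/2}): counting succeeds with probability well above 2/3.
n = 10_000
print(success_rate(n, beta=2 * n ** -0.5))
```

With $\beta = 2n^{-1/2}$ the heads count is about $4$ standard deviations away from $n/2$, so thresholding at $n/2$ succeeds almost always; shrinking $\beta$ below $n^{-1/2}$ visibly degrades the success rate.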
In this paper, we close the gap between the bounds, showing a tight threshold between the values of $\beta=n^{-c}$ where $O(\log n)$ width suffices and the regime where $poly(n)$ width is needed, with a transition at $c=1/3$. This gives a complete characterization (up to constant factors) of the memory complexity of solving the coin problem, for all values of bias $\beta$.
We introduce new techniques in both bounds. For the upper bound, we give a construction based on recursive majority that does not require a memory stack of size $\log n$ bits. For the lower bound, we introduce new combinatorial techniques for analyzing the progression of the success probabilities in read-once branching programs.

Mon, 21 Jun 2021 02:01:29 +0300

TR21-082 | Hardness on any Samplable Distribution Suffices: New Characterizations of One-Way Functions by Meta-Complexity |
Rahul Santhanam,
Hanlin Ren,
Rahul Ilango
https://eccc.weizmann.ac.il/report/2021/082

We show that one-way functions exist if and only if there is some samplable distribution $D$ such that it is hard to approximate the Kolmogorov complexity of a string sampled from $D$. Thus we characterize the existence of one-way functions by the average-case hardness of a natural \emph{uncomputable} problem on samplable distributions, extending a recent line of work by Liu and Pass (FOCS'20, STOC'21) and Ren and Santhanam (CCC'21).
We also show that the average-case hardness of approximating Minimum Circuit Size on a locally samplable distribution (where the sampler runs in sub-linear time by using random access to its input) is equivalent to the existence of one-way functions. This is the first characterization of one-way functions by a natural average-case hardness assumption on the Minimum Circuit Size Problem. We present several other characterizations and connections between one-way functions and average-case hardness of meta-complexity problems (problems about complexity) on samplable distributions.
We give various applications of these results to the foundations of cryptography and the theory of meta-complexity. We show that the average-case hardness of deciding k-SAT or Clique on any samplable distribution of high enough entropy implies the existence of one-way functions. Thus one-way functions follow from general assumptions on the average-case hardness of NP-complete problems. We observe that our assumptions are implied by standard cryptographic assumptions such as the Planted Clique hypothesis and the pseudorandomness of Goldreich's local functions.
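The Planted Clique hypothesis mentioned above is a hardness assumption over an efficiently samplable distribution. A minimal sampler for that distribution (an illustrative Python sketch; the function name and parameters are our own, not from the paper) looks like:

```python
import random

def planted_clique_graph(n, k, seed=None):
    # Sample an Erdos-Renyi graph G(n, 1/2) and plant a clique on k random
    # vertices -- a canonical efficiently samplable distribution on which
    # Clique is conjectured to be average-case hard (the Planted Clique
    # hypothesis cited above).
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if (u in clique and v in clique) or rng.random() < 0.5:
                edges.add((u, v))
    return edges, clique

edges, clique = planted_clique_graph(50, 10, seed=1)
print(len(edges), sorted(clique))
```

The sampler runs in polynomial time, which is exactly what makes it a "samplable distribution" in the sense of the characterizations above.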
Our results imply a range of \emph{equivalences} between various meta-complexity problems, showing that the theory of meta-complexity is very \emph{robust} when considering average-case complexity. We use our results to unconditionally solve various meta-complexity problems in CZK (computational zero-knowledge) on average, and give implications of our results for the classic question of proving NP-hardness for the Minimum Circuit Size Problem.

Wed, 16 Jun 2021 18:57:38 +0300

TR21-081 | Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits |
Sébastien Tavenas,
Srikanth Srinivasan,
Nutan Limaye
https://eccc.weizmann.ac.il/report/2021/081

An Algebraic Circuit for a polynomial $P\in F[x_1,\ldots,x_N]$ is a computational model for constructing the polynomial $P$ using only additions and multiplications. It is a \emph{syntactic} model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field), while constant-depth Boolean circuit lower bounds have been known since the early 1980s.
In this paper, we prove the first superpolynomial lower bounds against general algebraic circuits of all constant depths over all fields of characteristic $0$ (or large). We also prove the first lower bounds against homogeneous algebraic circuits of constant depth over any field.
Our approach is surprisingly simple. We first prove superpolynomial lower bounds for constant-depth Set-Multilinear circuits. While strong lower bounds were already known against such circuits, most previous lower bounds were of the form $f(d)\cdot poly(N)$, where $d$ denotes the degree of the polynomial. In analogy with Parameterized complexity, we call this an FPT lower bound. We extend a well-known technique of Nisan and Wigderson (FOCS 1995) to prove non-FPT lower bounds against constant-depth set-multilinear circuits computing the Iterated Matrix Multiplication polynomial $IMM_{n,d}$ (which computes a fixed entry of the product of $d$ $n\times n$ matrices). More precisely, we prove that any set-multilinear circuit of depth $\Delta$ computing $IMM_{n,d}$ must have size at least $n^{d^{\exp(-O(\Delta))}}.$ This result holds over any field.
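For concreteness, $IMM_{n,d}$ can be evaluated, and the set-multilinearity used above can be checked, in a few lines (an illustrative Python/NumPy sketch, not part of the paper):

```python
import numpy as np

def imm_value(mats):
    # Evaluate IMM_{n,d} at a point: the (0, 0) entry of the product
    # of the given d matrices, each of size n x n.
    prod = np.eye(mats[0].shape[0])
    for m in mats:
        prod = prod @ m
    return prod[0, 0]

rng = np.random.default_rng(0)
d, n = 4, 3
mats = [rng.integers(0, 5, size=(n, n)).astype(float) for _ in range(d)]

# Set-multilinearity: every monomial of IMM uses exactly one variable from
# each matrix, so scaling the i-th matrix by t scales the value by t.
scaled = list(mats)
scaled[2] = 3.0 * scaled[2]
assert np.isclose(imm_value(scaled), 3.0 * imm_value(mats))
print(imm_value(mats))
```

The assertion is exactly the degree-1-per-matrix property that makes $IMM_{n,d}$ set-multilinear with respect to the partition of the variables into the $d$ matrices.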
We then show how to convert any constant-depth algebraic circuit of size $s$ to a constant-depth set-multilinear circuit with a blow-up in size that is exponential in $d$ but only polynomial in $s$ over fields of characteristic $0$. (For depths greater than $3$, previous results of this form increased the depth of the resulting circuit to $\Omega(\log s)$.) This implies our constant-depth circuit lower bounds for $d$ that is a slow-growing function of $n$.
Finally, we observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits, using the known connection between algebraic hardness and randomness.

Tue, 15 Jun 2021 08:50:00 +0300

TR21-080 | Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise |
Lijie Chen,
Roei Tell
https://eccc.weizmann.ac.il/report/2021/080

We propose a new approach to the hardness-to-randomness framework and to the promise-BPP=promise-P conjecture. Classical results rely on non-uniform hardness assumptions to construct derandomization algorithms that work in the worst-case, or rely on uniform hardness assumptions to construct derandomization algorithms that work only in the average-case. In both types of results, the derandomization algorithm is ``black-box'' and uses the standard PRG approach. In this work we present results that closely relate **new and natural uniform hardness assumptions** to **worst-case derandomization** of promise-BPP, where the algorithms underlying the latter derandomization are **non-black-box**.
In our main result, we show that promise-BPP=promise-P if the following holds: There exists a multi-output function $f$, computable by logspace-uniform circuits of polynomial size and depth $n^2$, that cannot be computed by uniform probabilistic algorithms in time $n^c$, for some universal constant $c>1$, on **almost all inputs**. The required failure on ``almost all inputs'' is stronger than the standard requirement of failing on one input of each length; however, the same assumption without the depth restriction on $f$ is **necessary** for the conclusion. This suggests a potential equivalence between worst-case derandomization of promise-BPP of any form (i.e., not necessarily by a black-box algorithm) and the existence of efficiently computable functions that are hard for probabilistic algorithms on almost all inputs.
In our second result, we introduce a new and uniform hardness-to-randomness tradeoff for the setting of **superfast average-case derandomization**; prior to this work, superfast average-case derandomization was known only under non-uniform hardness assumptions. In an extreme instantiation of our new tradeoff, under appealing uniform hardness assumptions, we show that for every polynomial $T(n)$ and constant $\epsilon>0$ it holds that $BPTIME[T]\subseteq heur-DTIME[T\cdot n^{\epsilon}]$, where the ``heur'' prefix means that no polynomial-time algorithm can find, with non-negligible probability, an input on which the deterministic simulation errs.
Technically, our approach is to design **targeted PRGs and HSGs**, as introduced by Goldreich (LNCS, 2011). The targeted PRGs/HSGs ``produce randomness from the input'', as suggested by Goldreich and Wigderson (RANDOM 2002); and their analysis relies on non-black-box versions of the reconstruction procedure of Impagliazzo and Wigderson (FOCS 1998). Our main reconstruction procedure crucially relies on the ideas underlying the proof system of Goldwasser, Kalai, and Rothblum (J. ACM 2015).

Thu, 10 Jun 2021 10:46:31 +0300

TR21-079 | The zero-rate threshold for adversarial bit-deletions is less than 1/2 |
Venkatesan Guruswami,
Xiaoyu He,
Ray Li
https://eccc.weizmann.ac.il/report/2021/079

We prove that there exists an absolute constant $\delta>0$ such that any binary code $C\subset\{0,1\}^N$ tolerating $(1/2-\delta)N$ adversarial deletions must satisfy $|C|\le 2^{\mathrm{poly}(\log N)}$ and thus have rate asymptotically approaching $0$. This is the first constant-fraction improvement over the trivial bound that codes tolerating $N/2$ adversarial deletions must have rate going to $0$ asymptotically. Equivalently, we show that there exist absolute constants $A$ and $\delta>0$ such that any set $C\subset\{0,1\}^N$ of $2^{\log^A N}$ binary strings must contain two strings $c$ and $c'$ whose longest common subsequence has length at least $(1/2+\delta)N$. As an immediate corollary, we show that $q$-ary codes tolerating a fraction $1-(1+2\delta)/q$ of adversarial deletions must also have rate approaching $0$.
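The link between deletion codes and common subsequences can be checked directly on small examples: if two codewords share a common subsequence of length $L$, deleting $N-L$ symbols from each reduces both to that subsequence, so a code tolerating $t$ deletions needs every pairwise LCS to be shorter than $N-t$. A standard dynamic program (an illustrative Python sketch, not from the paper) computes LCS length:

```python
def lcs_length(a, b):
    # Classic O(|a| * |b|) dynamic program for the length of the longest
    # common subsequence, kept to two rows of memory.
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

# Brute-force certificate for small codes: check every pair of codewords.
c, c_prime = "0101100110", "0110101001"
print(lcs_length(c, c_prime))
```

The theorem above says that for any set of $2^{\log^A N}$ strings, some pair has LCS length at least $(1/2+\delta)N$; the DP makes that claim concrete for toy instances.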
Our techniques include string regularity arguments and a structural lemma that classifies binary strings by their oscillation patterns. Leveraging these tools, we find in any large code two strings with similar oscillation patterns, which we exploit to find a long common subsequence.

Wed, 09 Jun 2021 19:59:13 +0300

TR21-078 | A direct product theorem for quantum communication complexity with applications to device-independent QKD |
Rahul Jain,
Srijita Kundu
https://eccc.weizmann.ac.il/report/2021/078

We give a direct product theorem for the entanglement-assisted interactive quantum communication complexity of an $l$-player predicate $V$. In particular we show that for a distribution $p$ that is product across the input sets of the $l$ players, the success probability of any entanglement-assisted quantum communication protocol for computing $n$ copies of $V$, whose communication is $o(\log(\mathrm{eff}^*(V,p))\cdot n)$, goes down exponentially in $n$. Here $\mathrm{eff}^*(V, p)$ is a distributional version of the quantum efficiency or partition bound introduced by Laplante, Lerays and Roland (2014), which is a lower bound on the distributional quantum communication complexity of computing a single copy of $V$ with respect to $p$.
As an application of our result, we show that it is possible to do device-independent quantum key distribution (DIQKD) without the assumption that devices do not leak any information after inputs are provided to them. We analyze the DIQKD protocol given by Jain, Miller and Shi (2017), and show that when the protocol is carried out with devices that are compatible with $n$ copies of the Magic Square game, it is possible to extract $\Omega(n)$ bits of key from it, even in the presence of $O(n)$ bits of leakage. Our security proof is parallel, i.e., the honest parties can enter all their inputs into their devices at once, and works for a leakage model that is arbitrarily interactive, i.e., the devices of the honest parties Alice and Bob can exchange information with each other and with the eavesdropper Eve in any number of rounds, as long as the total number of bits or qubits communicated is bounded.

Tue, 08 Jun 2021 15:31:43 +0300

TR21-077 | Lower Bounds on Stabilizer Rank |
Shir Peleg,
Amir Shpilka,
Ben Lee Volk
https://eccc.weizmann.ac.il/report/2021/077

The stabilizer rank of a quantum state $\psi$ is the minimal $r$ such that $\left| \psi \right \rangle = \sum_{j=1}^r c_j \left|\varphi_j \right\rangle$ for $c_j \in \mathbb{C}$ and stabilizer states $\varphi_j$. The running time of several classical simulation methods for quantum circuits is determined by the stabilizer rank of the $n$-th tensor power of single-qubit magic states.
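The definition can be illustrated numerically (a Python/NumPy sketch, not from the paper; we take the T-type state as one common choice of single-qubit magic state). Since $|0\rangle$ and $|1\rangle$ are themselves stabilizer states, any single-qubit state trivially has stabilizer rank at most 2; the hard question is how the rank of the $n$-fold tensor power grows with $n$.

```python
import numpy as np

# Computational basis states |0>, |1> are single-qubit stabilizer states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A T-type magic state (one common choice of single-qubit magic state).
T = (ket0 + np.exp(1j * np.pi / 4) * ket1) / np.sqrt(2)

# Writing T = c0|0> + c1|1> exhibits a stabilizer decomposition with r = 2
# terms, so the single-qubit stabilizer rank of T is at most 2.
c = np.linalg.solve(np.column_stack([ket0, ket1]), T)
assert np.allclose(c[0] * ket0 + c[1] * ket1, T)
print(c)
```

For $n$ qubits the analogous naive decomposition into basis states has $2^n$ terms; the lower bounds below show that no decomposition into stabilizer states can do better than $\Omega(n)$ terms.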
We prove a lower bound of $\Omega(n)$ on the stabilizer rank of such states, improving the previous $\Omega(\sqrt{n})$ lower bound of Bravyi, Smith and Smolin [BSS16]. Further, we prove that for a sufficiently small constant $\delta$, the stabilizer rank of any state which is $\delta$-close to those states is $\Omega(\sqrt{n}/\log n)$. This is the first non-trivial lower bound for approximate stabilizer rank.
Our techniques rely on the representation of stabilizer states as quadratic functions over affine subspaces of $\mathbb{F}_2^n$, and we use tools from analysis of boolean functions and complexity theory. The proof of the first result involves a careful analysis of directional derivatives of quadratic polynomials, whereas the proof of the second result uses Razborov-Smolensky low degree polynomial approximations and correlation bounds against the majority function.
Sun, 06 Jun 2021 22:23:20 +0300

TR21-076 | Pseudorandom Generators, Resolution and Heavy Width |
Dmitry Sokolov
https://eccc.weizmann.ac.il/report/2021/076

Following the paper of Alekhnovich, Ben-Sasson, Razborov, and Wigderson \cite{ABRW04}, we call a pseudorandom generator $\mathrm{PRG}\colon \{0, 1\}^n \to \{0, 1\}^m$ hard for a propositional proof system $\mathrm{P}$ if $\mathrm{P}$ cannot efficiently prove the (properly encoded) statement $b \notin \mathrm{Im}(\mathrm{PRG})$ for any string $b \in \{0, 1\}^m$.
The authors of \cite{ABRW04} suggested the ``functional encoding'' of the considered statement for the Nisan--Wigderson generator, which allows the introduction of ``local'' extension variables. These extension variables may potentially significantly increase the power of the proof system. For this encoding, \cite{ABRW04} gave a lower bound of $\exp\left[\frac{n^2}{m \Omega\left(2^{2^{\Delta}}\right)}\right]$ on the length of Resolution proofs, where $\Delta$ is the degree of the dependency graph of the generator. This lower bound meets the barrier for the restriction technique.
In this paper, we introduce a ``heavy width'' measure for Resolution that allows us to show a lower bound of $\exp\left[\frac{n^2}{m 2^{O(\varepsilon \Delta)}}\right]$ on the length of Resolution proofs of the considered statement for the Nisan--Wigderson generator. This gives an exponential lower bound up to $\Delta := \log^{2 - \delta} n$ (the bigger the degree, the more extension variables we can use), and solves an open problem from \cite{ABRW04}.

Fri, 04 Jun 2021 17:38:44 +0300