ECCC - Reports
Latest Reports published at https://eccc.weizmann.ac.il
Wed, 08 Apr 2020 10:58:37 +0300

TR20-044 | Cryptography from Information Loss
Marshall Ball,
Elette Boyle,
Akshay Degwekar,
Apoorvaa Deshpande,
Alon Rosen,
Vinod Vaikuntanathan,
Prashant Vasudevan
https://eccc.weizmann.ac.il/report/2020/044

Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is ``lossy'' reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into ``useful'' hardness, namely cryptography.
Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction $C$ is $t$-lossy if, for any distribution $X$ over its inputs, the mutual information $I(X;C(X)) \leq t$. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006).
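To make the definition concrete, here is a small Python sketch (ours, not from the paper) that computes $I(X;C(X))$ exactly for a toy deterministic reduction with $X$ uniform over its inputs; for deterministic $C$, the mutual information equals the output entropy $H(C(X))$.

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {value: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(inputs, reduction):
    """I(X; C(X)) for X uniform over `inputs` and a deterministic
    reduction C; for deterministic C this equals H(C(X))."""
    out_dist = {}
    for x in inputs:
        z = reduction(x)
        out_dist[z] = out_dist.get(z, 0) + 1 / len(inputs)
    return entropy(out_dist)

# A maximally lossy toy "reduction": map an m-bit input to the OR of its bits.
m = 8
inputs = list(product([0, 1], repeat=m))
or_bits = lambda x: int(any(x))

print(mutual_information(inputs, or_bits))      # tiny: far below 1 bit
print(mutual_information(inputs, lambda x: x))  # identity leaks all m bits
```

The OR map is $t$-lossy for any $t \geq H(\mathrm{OR}(X)) \approx 0.04$ bits here, while the identity map attains the maximum $m$ bits.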
We then proceed to show several consequences of lossy reductions:
1. We say that a language $L$ has an $f$-reduction to a language $L'$ for a Boolean function $f$ if there is a (randomized) polynomial-time algorithm $C$ that takes an $m$-tuple of strings $X = (x_1,\ldots,x_m)$, with each $x_i\in\{0,1\}^n$, and outputs a string $z$ such that with high probability, $L'(z) = f(L(x_1), L(x_2), \ldots, L(x_m))$.
2. Suppose a language $L$ has an $f$-reduction $C$ to $L'$ that is $t$-lossy. Our first result is that one-way functions exist if $L$ is worst-case hard and one of the following conditions holds:
- $f$ is the OR function, $t \leq m/100$, and $L'$ is the same as $L$
- $f$ is the Majority function, and $t \leq m/100$
- $f$ is the OR function, $t \leq O(m\log{n})$, and the reduction has no error
This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions.
3. Our second result is about the stronger notion of $t$-compressing $f$-reductions -- reductions that only output $t$ bits. We show that if there is an average-case hard language $L$ that has a $t$-compressing Majority reduction to some language for $t=m/100$, then there exist collision-resistant hash functions.
This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
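For illustration only (ours, not from the paper): the $f$-reduction syntax in item 1 can be instantiated trivially for $f = \mathrm{OR}$ with $L' = L$ by concatenation, for a toy language $L$. Note this toy reduction is of course not lossy ($z$ reveals the whole input), unlike the reductions the results above require.

```python
def L(x: str) -> bool:
    """Toy language: strings that contain at least one '1'."""
    return '1' in x

def or_reduction(xs):
    """A trivial, error-free f-reduction for f = OR with L' = L:
    concatenation gives a single instance z with
    L(z) = L(x_1) OR ... OR L(x_m)."""
    return ''.join(xs)

xs = ['000', '010', '000']
z = or_reduction(xs)
assert L(z) == any(L(x) for x in xs)
```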
Wed, 08 Apr 2020 10:58:37 +0300

TR20-043 | A combinatorial MA-complete problem
Dorit Aharonov,
Alex Bredariol Grilo
https://eccc.weizmann.ac.il/report/2020/043

Despite the interest in the complexity class MA, the randomized analog of NP, only a couple of natural (promise-)MA-complete problems are known, the first due to Bravyi and Terhal (SIAM Journal of Computing 2009) and the second due to Bravyi (Quantum Information and Computation 2015). Surprisingly, both problems are stated using terminology from quantum computation. This makes it hard for classical complexity theorists to study these problems, and impedes possible progress, e.g., on the important question of derandomizing MA.
In this note we define a natural combinatorial problem called SetCSP and prove its MA-completeness. The problem generalizes the constraint satisfaction problem (CSP) to constraints on sets of strings. This note is, in fact, a combination of previously known works: the brilliant work of Bravyi and Terhal, together with an observation made in our previous work (Aharonov and Grilo, FOCS 2019) that a restricted case of Bravyi and Terhal's MA-complete problem (namely, the uniform case) is already complete, and moreover, that this restricted case can be stated using a classical, combinatorial description. Here we flesh out this observation.
This note, along with a translation of the main result of Aharonov and Grilo to the SetCSP language, implies that finding a gap-amplification procedure for SetCSP problems (namely, a generalization to SetCSPs of the gap-amplification used in Dinur's PCP proof) would imply MA = NP. This would resolve the major problem of derandomizing MA; indeed, finding gap-amplification for SetCSP is equivalent to the MA = NP problem.

Sun, 05 Apr 2020 22:37:00 +0300

TR20-042 | Poly-time blackbox identity testing for sum of log-variate constant-width ROABPs
Pranav Bisht,
Nitin Saxena
https://eccc.weizmann.ac.il/report/2020/042

Blackbox polynomial identity testing (PIT) affords 'extreme variable-bootstrapping' (Agrawal et al, STOC'18; PNAS'19; Guo et al, FOCS'19). This motivates us to study log-variate read-once oblivious algebraic branching programs (ROABPs). We restrict the width of the ROABP to a constant and study the more general sum-of-ROABPs model. We give the first poly($s$)-time blackbox PIT for sums of constant-many, size-$s$, $O(\log s)$-variate, constant-width ROABPs. The previous best for this model was quasi-polynomial time (Gurjar et al, CCC'15; CC'16), which is comparable to brute force in the log-variate setting. We also handle unbounded-many such ROABPs if each ROABP computes a homogeneous polynomial.
Our new techniques are: (1) showing that an ROABP computing a homogeneous polynomial can be made syntactically homogeneous in the same width; and (2) overcoming the hurdle of unknown variable order in sums of ROABPs in the log-variate setting (over any field).

Tue, 31 Mar 2020 16:31:36 +0300

TR20-041 | A Polynomial Degree Bound on Defining Equations of Non-rigid Matrices and Small Linear Circuits
Mrinal Kumar,
Ben Lee Volk
https://eccc.weizmann.ac.il/report/2020/041

We show that there is a defining equation of degree at most poly(n) for the (Zariski closure of the) set of non-rigid matrices: that is, we show that for every large enough field $\mathbb{F}$, there is a non-zero $n^2$-variate polynomial $P \in \mathbb{F}[x_{1, 1}, \ldots, x_{n, n}]$ of degree at most poly(n) such that every matrix $M$ which can be written as the sum of a matrix of rank at most $n/100$ and a matrix of sparsity at most $n^2/100$ satisfies $P(M) = 0$. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer and Landsberg [GHIL16] and improves the best known upper bound for this problem from $\exp(n^2)$ [KLPS14, GHIL16] down to poly(n).
We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices $M$ such that the linear transformation represented by $M$ can be computed by an algebraic circuit with at most $n^2/200$ edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded.
Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [SV15] to construct low degree ``universal'' maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension counting argument to show that any such polynomial map has a low degree annihilating polynomial completes the proof.
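For concreteness, a minimal NumPy sketch (ours, not the authors') that samples a member of the set described above, i.e. a sum of a rank-$\leq n/100$ matrix and an $\leq n^2/100$-sparse matrix; the theorem says every such $M$ satisfies $P(M) = 0$ for a single poly(n)-degree polynomial $P$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Low-rank part: rank(R) <= n/100.
r = n // 100
R = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Sparse part: at most n^2/100 nonzero entries.
S = np.zeros((n, n))
idx = rng.choice(n * n, size=n * n // 100, replace=False)
S.flat[idx] = rng.standard_normal(idx.size)

# M is "non-rigid" in the sense above.
M = R + S
assert np.linalg.matrix_rank(R) <= n // 100
assert np.count_nonzero(S) <= n * n // 100
```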
Sun, 29 Mar 2020 09:14:01 +0300

TR20-040 | Topology and adjunction in promise constraint satisfaction
Andrei Krokhin,
Jakub Opršal,
Marcin Wrochna,
Stanislav Zivny
https://eccc.weizmann.ac.il/report/2020/040

The approximate graph colouring problem concerns colouring a $k$-colourable
graph with $c$ colours, where $c\geq k$. This problem naturally generalises
to promise graph homomorphism and further to promise constraint satisfaction
problems. Complexity analysis of all these problems is notoriously difficult.
In this paper, we introduce two new techniques to analyse the complexity of
promise CSPs: one is based on topology and the other on adjunction. We apply
these techniques, together with the previously introduced algebraic approach,
to obtain new NP-hardness results for a significant class of approximate
graph colouring and promise graph homomorphism problems.

Thu, 26 Mar 2020 16:25:32 +0200

TR20-039 | Lower bounds on the sum of 25th-powers of univariates lead to complete derandomization of PIT
Pranjal Dutta,
Nitin Saxena,
Thomas Thierauf
https://eccc.weizmann.ac.il/report/2020/039

We consider the univariate polynomial $f_d:=(x+1)^d$ when represented as a sum of constant-powers of univariate polynomials. We define a natural measure for the model, the support-union, and conjecture that it is $\Omega(d)$ for $f_d$.
We show a stunning connection of the conjecture to the two main problems in algebraic complexity: Polynomial Identity Testing (PIT) and VP vs VNP. Our conjecture on $f_d$ implies blackbox-PIT in P. Assuming the Generalized Riemann Hypothesis (GRH), it also implies VP $\neq$ VNP. No such connection to PIT from lower bounds on constant-power representations of polynomials was known before. We establish that studying the expression of $(x+1)^d$ as the sum of $25^{th}$-powers of univariates suffices to solve these two major open questions.
In support, we show that our conjecture holds over the integer ring of any number field. We also establish a connection with the well-studied notion of matrix rigidity.

Wed, 25 Mar 2020 23:21:25 +0200

TR20-038 | Error Correcting Codes for Uncompressed Messages
Ofer Grossman,
Justin Holmgren
https://eccc.weizmann.ac.il/report/2020/038

Most types of messages we transmit (e.g., video, audio, images, text) are not fully compressed, since they do not have known efficient and information-theoretically optimal compression algorithms. When transmitting such messages, standard error correcting codes fail to take advantage of the fact that messages are not fully compressed.
We show that in this setting, it is sub-optimal to use standard error correction. We consider a model where there is a set of “valid messages” which the sender may send that may not be efficiently compressible, but where it is possible for the receiver to recognize valid messages. In this model, we construct a (probabilistic) encoding procedure that achieves better tradeoffs between data rates and error-resilience (compared to just applying a standard error correcting code).
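As a toy illustration of the model (ours, not the paper's construction): a receiver that can only test validity can still correct errors by local search over nearby strings. For concreteness the validity predicate here is the Hamming(7,4) parity checks, standing in for any efficiently recognizable valid-message set; the paper's sets need not have this form.

```python
import numpy as np
from itertools import combinations

# Stand-in validity test: "valid messages" are words passing the
# Hamming(7,4) parity checks. The receiver only needs membership access.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_valid(word):
    return not np.any(H @ word % 2)

def decode_by_validity(received, max_errors=1):
    """Correct errors using only the validity test: return the closest
    word (searching up to max_errors bit flips) that passes it."""
    for k in range(max_errors + 1):
        for pos in combinations(range(len(received)), k):
            cand = received.copy()
            cand[list(pos)] ^= 1
            if is_valid(cand):
                return cand
    return None

sent = np.array([1, 0, 1, 0, 1, 0, 1])   # passes the parity checks
corrupted = sent.copy()
corrupted[2] ^= 1                        # one bit flipped in transit
assert np.array_equal(decode_by_validity(corrupted), sent)
```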
Additionally, our techniques yield improved efficiently decodable (probabilistic) codes for fully compressed messages (the standard setting where the set of valid messages is all binary strings) in the high-rate regime.

Sun, 22 Mar 2020 10:59:44 +0200

TR20-037 | Failure of Feasible Disjunction Property for $k$-DNF Resolution and NP-hardness of Automating It
Michal Garlik
https://eccc.weizmann.ac.il/report/2020/037

We show that for every integer $k \geq 2$, the Res($k$) propositional proof system does not have the weak feasible disjunction property. Next, we generalize a recent result of Atserias and Müller [FOCS, 2019] to Res($k$). We show that if NP is not included in P (resp. QP, SUBEXP), then for every integer $k \geq 1$, Res($k$) is not automatable in polynomial (resp. quasi-polynomial, subexponential) time.

Sat, 21 Mar 2020 13:53:33 +0200

TR20-036 | Strong (D)QBF Dependency Schemes via Tautology-free Resolution Paths
Olaf Beyersdorff,
Joshua Blinkhorn,
Tomáš Peitl
https://eccc.weizmann.ac.il/report/2020/036

We suggest a general framework to study dependency schemes for dependency quantified Boolean formulas (DQBF). As our main contribution, we exhibit a new tautology-free DQBF dependency scheme that generalises the reflexive resolution path dependency scheme. We establish soundness of the tautology-free scheme, implying that it can be used in any DQBF proof system. We further explore the power of DQBF resolution systems parameterised by dependency schemes and show that our new scheme results in exponentially shorter proofs in comparison to the reflexive resolution path dependency scheme when used in the basic DQBF expansion proof system.
On QBFs, we demonstrate that our new scheme is exponentially stronger than the reflexive resolution path dependency scheme when used in Q-resolution, thus resulting in the strongest QBF dependency scheme known to date.

Sun, 15 Mar 2020 11:16:59 +0200

TR20-035 | No-Signaling MIPs and Faster-Than-Light Communication, Revisited
Justin Holmgren
https://eccc.weizmann.ac.il/report/2020/035

We revisit one original motivation for the study of no-signaling multi-prover
interactive proofs (NS-MIPs): specifically, that two spatially separated
provers are guaranteed to be no-signaling by the physical principle that
information cannot travel from one point to another faster than light.
We observe that with more than two provers, the physical connection between
no-signaling and faster-than-light communication is more nuanced, depending on
the relative positioning of the provers. In particular, we observe that
provers are guaranteed to be no-signaling if and only if their positions are
convexly independent. Other prover positionings provide weaker guarantees.
We consider a new issue that thus arises only in the many-prover setting:
how precisely must provers be positioned in order to guarantee the
soundness of a (NS-)MIP? We prove that substantially more precision is
required to guarantee full no-signaling than to guarantee soundness of a
specific NS-MIP for PSPACE that is implicit in the work of Kalai and Raz (CRYPTO 2009).
Sun, 15 Mar 2020 08:28:11 +0200