ECCC - Reports
https://eccc.weizmann.ac.il/
Latest Reports published at https://eccc.weizmann.ac.il
Wed, 14 Apr 2021 22:29:12 +0300

TR21-054 | Encodings and the Tree Evaluation Problem |
Ian Mertz,
James Cook
https://eccc.weizmann.ac.il/report/2021/054

We show that the Tree Evaluation Problem with alphabet size $k$ and height $h$ can be solved by branching programs of size $k^{O(h/\log h)} + 2^{O(h)}$. This answers a longstanding challenge of Cook et al. (2009) and gives the first general upper bound since the problem's inception.

Wed, 14 Apr 2021 22:29:12 +0300

TR21-053 | Information in propositional proofs and algorithmic proof search |
Jan Krajicek
https://eccc.weizmann.ac.il/report/2021/053

We study from the proof complexity perspective the (informal) proof search problem:
Is there an optimal way to search for propositional proofs?
We note that for any fixed proof system there exists a time-optimal proof search algorithm. Using classical proof complexity results about reflection principles we prove that a time-optimal proof search algorithm exists w.r.t. all proof systems iff a p-optimal proof system exists.
To characterize precisely the time proof search algorithms need for individual formulas, we introduce a new proof complexity measure based on algorithmic information concepts. In particular, to a proof system $P$ we attach an {\bf information-efficiency function} $i_P(\tau)$, assigning to each tautology $\tau$ a natural number, and we show that:
- $i_P(\tau)$ characterizes the time any $P$-proof search algorithm has to use on $\tau$, and for a fixed $P$ there is such an information-optimal algorithm,
- a proof system is information-efficiency optimal iff it is p-optimal,
- for non-automatizable systems $P$ there are formulas $\tau$ with short proofs but having large information measure $i_P(\tau)$.
We isolate and motivate the problem of establishing {\em unconditional} super-logarithmic lower bounds for $i_P(\tau)$ where no super-polynomial size lower bounds are known. We also point out connections of the new measure with some topics in proof complexity other than proof search.

Tue, 13 Apr 2021 10:34:11 +0300

TR21-052 | Upslices, Downslices, and Secret-Sharing with Complexity of $1.5^n$ |
Oded Nir,
Benny Applebaum
https://eccc.weizmann.ac.il/report/2021/052

A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$.
The collection of authorized/unauthorized sets can be captured by a monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$.
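As a toy illustration (not from the report), the simplest monotone access structure is the $n$-out-of-$n$ one, realized by XOR-based additive sharing: all $n$ shares together reconstruct the secret, while any proper subset is jointly uniform and reveals nothing about it.

```python
import secrets

def share(secret_bits: int, secret: int, n: int) -> list[int]:
    """Split `secret` into n XOR-shares. The first n-1 shares are uniformly
    random; the last is chosen so that all n shares XOR back to the secret."""
    shares = [secrets.randbits(secret_bits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """XOR all shares together to recover the secret."""
    out = 0
    for s in shares:
        out ^= s
    return out
```

Here the monotone function $f$ accepts only the full set of parties; the upslice/downslice families studied in the report generalize such structures to arbitrary min-term and max-term sizes.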
In this paper, we focus on monotone functions all of whose min-terms are sets of size $a$, and on their duals -- monotone functions whose max-terms are of size $b$. We refer to these classes as $(a,n)$-upslices and $(b,n)$-downslices, and note that these natural families correspond to monotone $a$-regular DNFs and monotone $(n-b)$-regular CNFs. We derive the following results.
1. (General downslices) Every downslice can be realized with a total share size of $1.5^{n+o(n)}<2^{0.585 n}$. Since every monotone function can be cheaply decomposed into $n$ downslices, we obtain a similar result for general access structures, improving the previously known $2^{0.637n+o(n)}$ complexity of Applebaum, Beimel, Nir and Peter (STOC 2020). We also achieve a minor improvement in the exponent of linear secret-sharing schemes.
2. (Random mixture of upslices) Following Beimel and Farras (TCC 2020), who studied the complexity of random DNFs with constant-size terms, we consider the following general distribution $F$ over monotone DNFs: for each width value $a\in [n]$, uniformly sample $k_a$ monotone terms of size $a$, where $k=(k_1,\ldots,k_n)$ is an arbitrary vector of non-negative integers. We show that, except with exponentially small probability, $F$ can be realized with a share size of $2^{0.5 n+o(n)}$ and can be linearly realized with an exponent strictly smaller than $2/3$. Our proof also provides a candidate distribution for an ``exponentially-hard'' access structure.
We use our results to explore connections between several seemingly unrelated questions about the complexity of secret-sharing schemes, such as worst-case vs. average-case, linear vs. non-linear, and primal vs. dual access structures. We prove that, in at least one of these settings, there is a significant gap in secret-sharing complexity.

Mon, 12 Apr 2021 20:17:39 +0300

TR21-051 | Binary Interactive Error Resilience Beyond $1/8$ (or why $(1/2)^3 > 1/8$) |
Raghuvansh Saxena,
Klim Efremenko,
Gillat Kol
https://eccc.weizmann.ac.il/report/2021/051

Interactive error correcting codes are codes that encode a two-party communication protocol into an error-resilient protocol that succeeds even if a constant fraction of the communicated symbols are adversarially corrupted, at the cost of increasing the communication by a constant factor. What is the largest fraction of corruptions that such codes can protect against?
If the error-resilient protocol is allowed to communicate large (constant sized) symbols, Braverman and Rao (STOC, 2011) show that the maximum rate of corruptions that can be tolerated is $1/4$. They also give a binary interactive error correcting protocol that only communicates bits and is resilient to $1/8$ fraction of errors, but leave the optimality of this scheme as an open problem.
We answer this question in the negative, breaking the $1/8$ barrier. Specifically, we give a binary interactive error correcting scheme that is resilient to a $5/39 > 1/8$ fraction of adversarial errors. Our scheme builds upon a novel construction of binary list-decodable interactive codes with small list size.

Fri, 09 Apr 2021 07:15:46 +0300

TR21-050 | Linear Threshold Secret-Sharing with Binary Reconstruction |
Marshall Ball,
Alper Cakan,
Tal Malkin
https://eccc.weizmann.ac.il/report/2021/050

Motivated in part by applications in lattice-based cryptography, we initiate the study of the size of linear threshold (`$t$-out-of-$n$') secret-sharing schemes where the linear reconstruction function is restricted to coefficients in $\{0,1\}$. We prove upper and lower bounds on the share size of such schemes. One ramification of our results is that a natural variant of Shamir's classic scheme [Comm. of ACM, 1979], where bit-decomposition is applied to each share, is optimal when the underlying field has characteristic 2. Another ramification is that schemes obtained from some monotone formulae are optimal for certain threshold values when the field's characteristic is any constant. We prove our results by defining and investigating an equivalent variant of Karchmer and Wigderson's Monotone Span Programs [CCC, 1993].
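For context, here is a minimal sketch of standard Shamir $t$-out-of-$n$ sharing over a prime field. Note that its Lagrange reconstruction coefficients are generally arbitrary field elements, not the $\{0,1\}$ coefficients the report restricts to; the field and parameters below are illustrative only.

```python
import random

P = 2**61 - 1  # a Mersenne prime; secret and shares live in GF(P)

def shamir_share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Evaluate a random degree-(t-1) polynomial with constant term `secret`
    at the points 1..n. Any t shares determine it; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at 0. The coefficients yi * l_i(0) use field
    inverses, so they are generally NOT in {0,1}."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret
```

The bit-decomposition variant mentioned above post-processes such shares to allow $\{0,1\}$-only reconstruction; that transformation is not sketched here.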
We also study the complexity of such schemes with the additional requirement that the joint distribution of the shares of any unauthorized set of parties is not only independent of the secret, but also uniformly distributed. This property is critical for the security of certain applications in lattice-based cryptography. We show that any such scheme must use $\Omega(n\log n)$ field elements, regardless of the field. Moreover, for any field this is tight up to constant factors for the special case where any $t=n-1$ parties can reconstruct, as well as for any threshold when the field characteristic is 2.

Sun, 04 Apr 2021 09:36:04 +0300

TR21-049 | Kolmogorov complexity and nondeterminism versus determinism for polynomial time computations |
Juraj Hromkovic
https://eccc.weizmann.ac.il/report/2021/049

We call any consistent and sufficiently powerful formal theory in which one can algorithmically verify in polynomial time whether a text is a proof \textbf{efficiently verifiable mathematics} (ev-mathematics). We study the question of whether nondeterminism is more powerful than determinism for polynomial time computations in the framework of ev-mathematics. Our main results are as follows. \\
"$\P \subsetneq \NP$ or for any deterministic, polynomial time compression algorithm $A$ there exists a nondeterministic, polynomial time compression machine $M$ that compresses infinitely many binary strings logarithmically better than $A$." \\
"$\P \subsetneq \NP$ or the $f$-time resource-bounded Kolmogorov complexity of any binary string $x$ can be computed in deterministic polynomial time for each polynomial-time constructible function $f$."

Thu, 01 Apr 2021 15:52:24 +0300

TR21-048 | Better Pseudodistributions and Derandomization for Space-Bounded Computation |
William Hoza
https://eccc.weizmann.ac.il/report/2021/048

Three decades ago, Nisan constructed an explicit pseudorandom generator (PRG) that fools width-$n$ length-$n$ read-once branching programs (ROBPs) with error $\varepsilon$ and seed length $O(\log^2 n + \log n \cdot \log(1/\varepsilon))$ (Combinatorica 1992). Nisan's generator remains the best explicit PRG known for this important model of computation. However, a recent line of work starting with Braverman, Cohen, and Garg (SICOMP 2020; see also Chattopadhyay and Liao, CCC 2020; Cohen, Doron, Renard, Sberlo, and Ta-Shma, ECCC 2021; Pyne and Vadhan, ECCC 2021) has shown how to construct *weighted* pseudorandom generators (WPRGs, aka pseudorandom pseudodistribution generators) with better seed lengths than Nisan's generator when the error parameter $\varepsilon$ is small.
In this work, we present an explicit WPRG for width-$n$ length-$n$ ROBPs with seed length $O(\log^2 n + \log(1/\varepsilon))$. Our seed length eliminates $\log \log$ factors from prior constructions, and our generator completes this line of research in the sense that further improvements would require beating Nisan's generator in the standard constant-error regime. Our technique is a variation of a recently-discovered reduction that converts moderate-error PRGs into low-error WPRGs (Cohen et al. ECCC 2021; Pyne and Vadhan ECCC 2021). Our version of the reduction uses averaging samplers.
We also point out that as a consequence of the recent work on WPRGs, any randomized space-$S$ decision algorithm can be simulated deterministically in space $O(S^{3/2} / \sqrt{\log S})$. This is a slight improvement over Saks and Zhou's celebrated $O(S^{3/2})$ bound (JCSS 1999). For this application, our improved WPRG is not necessary.

Sat, 27 Mar 2021 11:27:49 +0300

TR21-047 | Random restrictions and PRGs for PTFs in Gaussian Space |
Zander Kelley,
Raghu Meka
https://eccc.weizmann.ac.il/report/2021/047

A polynomial threshold function (PTF) $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a function of the form $f(x) = \mathrm{sign}(p(x))$ where $p$ is a polynomial of degree at most $d$. PTFs are a classical and well-studied complexity class with applications across complexity theory, learning theory, approximation theory, quantum complexity and more. We address the question of designing pseudorandom generators (PRGs) for polynomial threshold functions (PTFs) in Gaussian space: design a PRG that takes a seed of few random bits and outputs an $n$-dimensional vector whose distribution is indistinguishable from a standard multivariate Gaussian by any degree-$d$ PTF.
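To make the model concrete, here is a toy degree-$2$ PTF evaluated on true Gaussian inputs; the polynomial, dimension, and sample counts are illustrative only. A PRG for this class must produce outputs whose acceptance probability under every such $f$ matches the Gaussian one up to $\varepsilon$, using far fewer random bits.

```python
import random

def ptf(x: list[float]) -> int:
    """A degree-2 PTF: sign(p(x)) with p(x) = x1*x2 + x3^2 - 1."""
    p = x[0] * x[1] + x[2] ** 2 - 1.0
    return 1 if p >= 0 else -1

def acceptance_probability(trials: int, seed: int = 0) -> float:
    """Estimate Pr[f(g) = 1] for g ~ N(0, I_3) by Monte Carlo sampling.
    A PRG for degree-2 PTFs must approximate this probability within eps
    while consuming only a short truly random seed."""
    rng = random.Random(seed)
    hits = sum(ptf([rng.gauss(0.0, 1.0) for _ in range(3)]) == 1
               for _ in range(trials))
    return hits / trials
```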
Our main result is a PRG that takes a seed of $d^{O(1)}\log ( n / \varepsilon)\log(1/\varepsilon)/\varepsilon^2$ random bits whose output cannot be distinguished from the $n$-dimensional Gaussian distribution with advantage better than $\varepsilon$ by degree-$d$ PTFs. The best previous generator, due to O'Donnell, Servedio, and Tan (STOC'20), had a quasi-polynomial dependence on the degree $d$ (i.e., a seed length of $d^{O(\log d)}$). Along the way we prove a few nearly-tight structural properties of restrictions of PTFs that may be of independent interest.

Fri, 26 Mar 2021 13:15:24 +0300

TR21-046 | Fourier Growth of Parity Decision Trees |
Uma Girish,
Avishay Tal,
Kewen Wu
https://eccc.weizmann.ac.il/report/2021/046

We prove that for every parity decision tree of depth $d$ on $n$ variables, the sum of absolute values of Fourier coefficients at level $\ell$ is at most $d^{\ell/2} \cdot O(\ell \cdot \log(n))^\ell$.
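The quantity being bounded, $\sum_{|S|=\ell} |\hat f(S)|$, can be computed by brute force for small examples. The sketch below (not from the paper) does so for a $\pm 1$-valued function given as a callable on bit strings encoded as integers:

```python
from itertools import combinations

def fourier_coefficient(f, n: int, S: tuple) -> float:
    """hat f(S) = E_x[ f(x) * (-1)^{sum_{i in S} x_i} ] over uniform
    x in {0,1}^n, for f taking values in {-1, +1}."""
    total = 0
    for x in range(2 ** n):
        chi = (-1) ** sum((x >> i) & 1 for i in S)  # character chi_S(x)
        total += f(x) * chi
    return total / 2 ** n

def level_l1_mass(f, n: int, ell: int) -> float:
    """Sum of |hat f(S)| over all subsets S of size ell (the level-ell
    Fourier L1 mass bounded in the theorem above)."""
    return sum(abs(fourier_coefficient(f, n, S))
               for S in combinations(range(n), ell))
```

For example, the function $f(x) = (-1)^{x_1 \oplus x_2}$ (computable by a depth-$1$ parity decision tree) has all of its Fourier mass on the single level-$2$ set $\{1,2\}$.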
Our result is nearly tight for small values of $\ell$ and extends a previous Fourier bound for standard decision trees by Sherstov, Storozhenko, and Wu (STOC, 2021).
As an application of our Fourier bounds, using the results of Bansal and Sinha (STOC, 2021), we show that the $k$-fold Forrelation problem has (randomized) parity decision tree complexity $\tilde{\Omega}\left(n^{1-1/k}\right)$, while having quantum query complexity $\lceil k/2\rceil$.
Our proof follows a random-walk approach, analyzing the contribution of a random path in the decision tree to the level-$\ell$ Fourier expression.
To carry out the argument, we apply a careful cleanup procedure to the parity decision tree, ensuring that the value of the random walk is bounded with high probability. We observe that step sizes for the level-$\ell$ walks can be computed from the intermediate values of level-$(\le \ell-1)$ walks, which calls for an inductive argument.
Our approach differs from previous proofs of Tal (FOCS, 2020) and Sherstov, Storozhenko, and Wu (STOC, 2021) that relied on decompositions of the tree. In particular, for the special case of standard decision trees we view our proof as slightly simpler and more intuitive.
In addition, we prove a similar bound for noisy decision trees of cost at most $d$ -- a model that was recently introduced by Ben-David and Blais (FOCS, 2020).

Thu, 25 Mar 2021 05:37:04 +0200

TR21-045 | Reconstruction Algorithms for Low-Rank Tensors and Depth-3 Multilinear Circuits |
Vishwas Bhargava,
Shubhangi Saraf,
Ilya Volkovich
https://eccc.weizmann.ac.il/report/2021/045

We give new and efficient black-box reconstruction algorithms for some classes of depth-$3$ arithmetic circuits. As a consequence, we obtain the first efficient algorithm for computing the tensor rank and for finding the optimal tensor decomposition as a sum of rank-one tensors when the input is a {\it constant-rank} tensor. More specifically, we provide efficient learning algorithms that run in randomized polynomial time over general fields and in deterministic polynomial time over $\mathbb R$ and $\mathbb C$ for the following classes:
\begin{enumerate}
\item {\it Set-multilinear depth-$3$ circuits of constant top fan-in ($\Sigma\Pi\Sigma_{{\sqcup_j X_j}}(k)$ circuits)}. As a consequence of our algorithm, we obtain the first polynomial-time algorithm for tensor rank computation and optimal tensor decomposition of constant-rank tensors. This result holds for $d$-dimensional tensors for any $d$, but is interesting even for $d=3$.
\item {\it Sums of powers of constantly many linear forms ($\Sigma\!\wedge\!\Sigma(k)$ circuits)}. As a consequence we obtain the first polynomial-time algorithm for tensor rank computation and optimal tensor decomposition of constant-rank symmetric tensors.
\item {\it Multilinear depth-3 circuits of constant top fan-in (multilinear $\Sigma\Pi\Sigma(k)$ circuits)}. Our algorithm works over all fields of characteristic 0 or large enough characteristic. Prior to our work the only efficient algorithms known were over polynomially-sized finite fields \cite{KarninShpilka09}.
\end{enumerate}
Prior to our work, the only polynomial-time or even subexponential-time algorithms known (deterministic or randomized) for subclasses of $\Sigma\Pi\Sigma(k)$ circuits that also work over large/infinite fields were for the setting when the top fan-in $k$ is at most $2$ \cite{Sin16, Sin20}.
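As a small illustration of the objects involved (the decomposition being searched for, not the reconstruction algorithm itself), a $3$-dimensional tensor has rank at most $r$ when it equals a sum of $r$ rank-one terms $u \otimes v \otimes w$; the sketch below verifies such a decomposition entrywise over the rationals:

```python
def rank_one(u, v, w):
    """The rank-one tensor (u ⊗ v ⊗ w)[i][j][k] = u[i] * v[j] * w[k]."""
    return [[[ui * vj * wk for wk in w] for vj in v] for ui in u]

def add(T, S):
    """Entrywise sum of two tensors of identical shape."""
    return [[[T[i][j][k] + S[i][j][k] for k in range(len(T[0][0]))]
             for j in range(len(T[0]))] for i in range(len(T))]

def decompose_check(T, terms) -> bool:
    """Does T equal the sum of the given rank-one terms (u, v, w)?"""
    total = rank_one(*terms[0])
    for t in terms[1:]:
        total = add(total, rank_one(*t))
    return total == T
```

A reconstruction algorithm, given only black-box access to the tensor (as a set-multilinear polynomial), must find such a list of terms of minimum length.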
Mon, 22 Mar 2021 20:47:56 +0200