ECCC - Reports
Latest Reports published at https://eccc.weizmann.ac.il
Sat, 20 Oct 2018 14:28:18 +0300

TR18-174 | Parity Decision Tree Complexity is Greater Than Granularity |
Vladimir Podolskii,
Anastasiya Chistopolskaya
https://eccc.weizmann.ac.il/report/2018/174

We prove a new lower bound on the parity decision tree complexity $D_{\oplus}(f)$ of a Boolean function $f$. Namely, the granularity of a Boolean function $f$ is the smallest $k$ such that all Fourier coefficients of $f$ are integer multiples of $1/2^k$. We show that $D_{\oplus}(f)\geq k+1$.
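To make the definition concrete, here is a brute-force sketch (our own illustration, not code from the paper; all names are ours) that computes the Fourier coefficients of a small Boolean function exactly and reads off its granularity:

```python
from fractions import Fraction
from itertools import product

def fourier_coefficient(f, S, n):
    # hat{f}(S) = 2^{-n} * sum_x f(x) * prod_{i in S} x_i, over x in {-1,1}^n
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return Fraction(total, 2 ** n)

def granularity(f, n):
    # smallest k such that every hat{f}(S) is an integer multiple of 1/2^k
    k = 0
    for bits in range(2 ** n):
        S = [i for i in range(n) if bits >> i & 1]
        c = fourier_coefficient(f, S, n)
        while c.denominator > 2 ** k:
            k += 1
    return k

maj3 = lambda x: 1 if sum(x) > 0 else -1  # majority on 3 bits, in {-1,1} notation
k = granularity(maj3, 3)
print(k + 1)  # the lower bound D_oplus(f) >= k+1; for MAJ_3 this gives 2
```

For $MAJ_3$ the nonzero coefficients are $\pm 1/2$, so $k=1$ and the bound gives $D_{\oplus} \geq 2$, matching the exact value $n - B(n) + 1 = 3 - 2 + 1 = 2$ stated below.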
This lower bound improves on the known lower bound via the sparsity of $f$. Using our lower bound we determine the exact parity decision tree complexity of several important Boolean functions, including majority, recursive majority and the $MOD^3$ function. For majority the complexity is $n - B(n)+1$, where $B(n)$ is the number of ones in the binary representation of $n$. For recursive majority the complexity is $\frac{n+1}{2}$. For $MOD^3$ the complexity is $n-1$ for $n$ divisible by 3 and $n$ otherwise. Finally, we provide an example of a function for which our lower bound is not tight.

Sat, 20 Oct 2018 14:28:18 +0300 · https://eccc.weizmann.ac.il/report/2018/174

TR18-173 | The Non-Hardness of Approximating Circuit Size |
Eric Allender,
Rahul Ilango,
Neekon Vafa
https://eccc.weizmann.ac.il/report/2018/173

The Minimum Circuit Size Problem (MCSP) has been the focus of intense study recently; MCSP is hard for SZK under rather powerful reductions, and is provably not hard under “local” reductions computable in TIME($n^{0.49}$). The question of whether MCSP is NP-hard (or indeed, hard even for small subclasses of P) under some of the more familiar notions of reducibility (such as many-one or Turing reductions computable in polynomial time or in AC$^0$) is closely related to many of the longstanding open questions in complexity theory.
All known hardness results for MCSP hold also for computing somewhat weak approximations to the circuit complexity of a function. Some of these results were proved by exploiting a connection to a notion of time-bounded Kolmogorov complexity (KT) and the corresponding decision problem (MKTP). More recently, a new approach for proving improved hardness results for MKTP was developed, but this approach establishes only hardness of extremely good approximations of the form $1+o(1)$, and these improved hardness results are not yet known to hold for MCSP. In particular, it is known that MKTP is hard for the complexity class DET under nonuniform $\leq^{AC^0}_m$ reductions, implying MKTP is not in AC$^0[p]$ for any prime $p$. It is still open if similar circuit lower bounds hold for MCSP. One possible avenue for proving a similar hardness result for MCSP would be to improve the hardness of approximation for MKTP beyond $1 + o(1)$ to $\omega(1)$. In this paper, we show that this is impossible.
More specifically, we prove that PARITY does not reduce to the problem of computing super-linear approximations to KT-complexity or circuit size via AC$^0$-Turing reductions that make $O(1)$ queries. This is significant, since it is known that just one query to a much worse approximation of circuit size or KT-complexity suffices for an AC$^0$ reduction to compute an approximation to any set in P/poly. For weaker approximations, we also prove non-hardness under more powerful reductions. Our non-hardness results are unconditional, in contrast to the conditional results presented in prior work (for more powerful reductions, but for much worse approximations). This highlights obstacles that would have to be overcome by any proof that MKTP or MCSP is hard for NP under AC$^0$ reductions. It may also be a step toward confirming a conjecture of Murray and Williams that MCSP is not NP-complete under logtime-uniform $\leq^{AC^0}_m$ reductions.

Wed, 17 Oct 2018 07:46:36 +0300 · https://eccc.weizmann.ac.il/report/2018/173

TR18-172 | Building Strategies into QBF Proofs |
Olaf Beyersdorff,
Joshua Blinkhorn,
Meena Mahajan
https://eccc.weizmann.ac.il/report/2018/172

Strategy extraction is of paramount importance for quantified Boolean formulas (QBF), both in solving and in proof complexity. It extracts (counter)models for a QBF from a run of the solver or from a proof of the QBF, thereby allowing one to certify the solver's answer or to establish soundness of the system. So far in the QBF literature, strategy extraction has been performed algorithmically from proofs. Here we devise the first QBF system where (partial) strategies are built into the proof and are constructed piecewise by simple operations along with the derivation.
This has several advantages: (1) lines of our calculus have a clear semantic meaning as they are accompanied by semantic objects; (2) partial strategies are represented succinctly (in contrast to some previous approaches); (3) our calculus has strategy extraction by design; and (4) the partial strategies allow new sound inference steps which are disallowed in previous central QBF calculi such as Q-Resolution and long-distance Q-Resolution.
The last item (4) allows us to show an exponential separation between our new system and the previously studied reductionless long-distance resolution calculus, introduced to model QCDCL solving.
Our approach also naturally lifts to dependency QBFs (DQBF), where it yields the first sound and complete CDCL-type calculus for DQBF, thus opening future avenues into DQBF CDCL solving.

Sun, 14 Oct 2018 16:19:17 +0300 · https://eccc.weizmann.ac.il/report/2018/172

TR18-171 | Testing Graphs in Vertex-Distribution-Free Models |
Oded Goldreich
https://eccc.weizmann.ac.il/report/2018/171

Prior studies of testing graph properties presume that the tester can obtain uniformly distributed vertices in the tested graph (in addition to obtaining answers to some type of graph queries).
Here we envision settings in which it is only feasible to obtain random vertices drawn according to an arbitrary distribution (and, in addition, obtain answers to the usual graph-queries).
We initiate a study of testing graph properties in such settings, while adapting the definition of distance between graphs so that it reflects the different probability weight of different vertices.
Hence, the distance to the property represents the relative importance of the ``part of the graph'' that violates the property.
We consider such ``vertex-distribution free'' (VDF) versions of the two most-studied models of testing graph properties (i.e., the dense graph model and the bounded-degree model).
In both cases, we show that VDF testing within complexity that is independent of the size of the tested graph is possible only if the same property can be tested in the standard model with one-sided error and size-independent complexity.
We also show that this necessary condition is not sufficient; yet, we present size-independent VDF testers for many of the natural properties that satisfy the necessary condition.
Wed, 10 Oct 2018 16:02:31 +0300 · https://eccc.weizmann.ac.il/report/2018/171

TR18-170 | Cops-Robber games and the resolution of Tseitin formulas |
Nicola Galesi,
Jacobo Toran,
Navid Talebanfard
https://eccc.weizmann.ac.il/report/2018/170

We characterize several complexity measures for the resolution of Tseitin formulas in terms of a two-person cop-robber game. Our game is a slight variation of the one Seymour and Thomas used to characterize the tree-width parameter. For any undirected graph, by counting the number of cops needed in our game to catch a robber in it, we are able to exactly characterize the width, variable space and depth measures for the resolution of the Tseitin formula corresponding to that graph. We also give an exact game characterization of resolution variable space for any formula.
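For readers unfamiliar with Tseitin formulas, here is a small generator sketch (our own illustration, not code from the paper): one edge variable per edge, and for each vertex a parity constraint expanded into clauses.

```python
from itertools import product

def tseitin_cnf(edges, charges):
    """CNF clauses of the Tseitin formula of a graph: for each vertex, the
    parity of its incident edge variables must equal the vertex's charge.
    Edge variables are numbered 1..|edges| in the order given; a negative
    literal means the edge variable is false."""
    var = {e: i + 1 for i, e in enumerate(edges)}
    clauses = []
    for v, c in charges.items():
        inc = [var[e] for e in edges if v in e]
        # forbid every assignment of the incident edges with the wrong parity
        for bits in product((0, 1), repeat=len(inc)):
            if sum(bits) % 2 != c:
                clauses.append([x if b == 0 else -x for x, b in zip(inc, bits)])
    return clauses

# Triangle with odd total charge: the resulting formula is unsatisfiable.
edges = [(1, 2), (2, 3), (1, 3)]
cnf = tseitin_cnf(edges, {1: 1, 2: 0, 3: 0})
print(len(cnf))  # 3 vertices x 2 forbidden parity patterns each = 6 clauses
```

For a connected graph the formula is unsatisfiable exactly when the charges sum to an odd number, which is the standard source of hard resolution instances.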
We show that our game can be played in a monotone way. This implies that the associated resolution measures on Tseitin formulas correspond
exactly to those under the restriction of Davis-Putnam resolution, implying that this kind of resolution is optimal on Tseitin formulas for all the considered measures.
Using our characterizations we improve the existing complexity bounds for Tseitin formulas, showing that resolution width, depth and variable space coincide up to a logarithmic factor, and that variable space is bounded by the clause space times a logarithmic factor.

Sun, 07 Oct 2018 16:48:44 +0300 · https://eccc.weizmann.ac.il/report/2018/170

TR18-169 | Optimality of Linear Sketching under Modular Updates |
Kaave Hosseini,
Shachar Lovett,
Grigory Yaroslavtsev
https://eccc.weizmann.ac.il/report/2018/169

We study the relation between streaming algorithms and linear sketching algorithms, in the context of binary updates. We show that for inputs in $n$ dimensions,
the existence of efficient streaming algorithms which can process $\Omega(n^2)$ updates implies efficient linear sketching algorithms with comparable cost.
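To make "linear sketching" concrete, here is a toy sketch over GF(2) (our own illustration; the dimensions and the random matrix are arbitrary, and this is not the paper's construction): the algorithm stores only $Ax \bmod 2$, and each binary update XORs one column of $A$ into the stored sketch.

```python
import random

random.seed(0)
n, k = 16, 4                     # input dimension and sketch dimension
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]

# A linear sketch over GF(2) stores only s = A x (mod 2); the streaming
# update "flip coordinate i" XORs column i of A into the sketch.
x = [0] * n
s = [0] * k
updates = [random.randrange(n) for _ in range(100)]
for i in updates:
    x[i] ^= 1
    for r in range(k):
        s[r] ^= A[r][i]

# The incrementally maintained sketch agrees with A applied to the final x.
direct = [sum(A[r][j] * x[j] for j in range(n)) % 2 for r in range(k)]
print(s == direct)  # True
```

The point of the paper is the converse direction: under enough binary updates, any efficient streaming algorithm can be replaced by a sketch of this linear form at comparable cost.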
This improves upon the previous work of Li, Nguyen and Woodruff [LNW14] and Ai, Hu, Li and Woodruff [AHLW16], which required a triple-exponential number of updates to achieve a similar result for updates over integers. We extend our results to updates modulo $p$ for integers $p \ge 2$, and to approximation instead of exact computation.

Sun, 07 Oct 2018 16:45:32 +0300 · https://eccc.weizmann.ac.il/report/2018/169

TR18-168 | An upper bound on $\ell_q$ norms of noisy functions |
Alex Samorodnitsky
https://eccc.weizmann.ac.il/report/2018/168

Let $T_{\epsilon}$ be the noise operator acting on functions on the boolean cube $\{0,1\}^n$. Let $f$ be a nonnegative function on $\{0,1\}^n$ and let $q \ge 1$. We upper bound the $\ell_q$ norm of $T_{\epsilon} f$ by the average $\ell_q$ norm of conditional expectations of $f$, given sets of roughly $(1-2\epsilon)^{r(q)} \cdot n$ variables, where $r$ is an explicitly defined function of $q$.
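As a brute-force sketch of the objects involved (our own illustration, assuming the usual conventions: each coordinate is flipped independently with probability $\epsilon$, and norms are taken with respect to the uniform measure on the cube):

```python
from itertools import product

def noise(f, n, eps):
    # T_eps f(x) = E_y[f(y)], where each coordinate of y is x's coordinate
    # flipped independently with probability eps
    def Tf(x):
        total = 0.0
        for y in product((0, 1), repeat=n):
            flips = sum(a != b for a, b in zip(x, y))
            total += eps ** flips * (1 - eps) ** (n - flips) * f(y)
        return total
    return Tf

def lq_norm(g, n, q):
    # ||g||_q = (E_x |g(x)|^q)^(1/q) under the uniform measure on {0,1}^n
    return (sum(abs(g(x)) ** q for x in product((0, 1), repeat=n)) / 2 ** n) ** (1 / q)

f = lambda x: float(all(x))      # indicator of the all-ones point; nonnegative
n, eps, q = 4, 0.1, 2.0
print(lq_norm(noise(f, n, eps), n, q) <= lq_norm(f, n, q))  # True: T_eps contracts
```

Since $T_{\epsilon}$ is an averaging operator it never increases $\ell_q$ norms; the paper's contribution is the much finer upper bound in terms of conditional expectations on subsets of the variables.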
We describe some applications for error-correcting codes and for matroids. In particular, we derive an upper bound on the weight distribution of duals of BEC-capacity achieving binary linear codes. This improves the known bounds on the linear-weight components of the weight distribution of constant rate binary Reed-Muller codes for almost all rates.

Sun, 07 Oct 2018 16:43:32 +0300 · https://eccc.weizmann.ac.il/report/2018/168

TR18-167 | Improved bounds on Fourier entropy and Min-entropy |
Srinivasan Arunachalam,
Sourav Chakraborty,
Michal Koucky,
Nitin Saurabh,
Ronald de Wolf
https://eccc.weizmann.ac.il/report/2018/167

Given a Boolean function $f: \{-1,1\}^n\rightarrow \{-1,1\}$, define the Fourier distribution to be the distribution on subsets of $[n]$ where each $S\subseteq [n]$ is sampled with probability $\widehat{f}(S)^2$. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai [FK96] seeks to relate two fundamental measures associated with the Fourier distribution: does there exist a universal constant $C>0$ such that $\mathbb{H}(\hat{f}^2) \leq C\cdot Inf(f)$, where $\mathbb{H}(\hat{f}^2)$ is the Shannon entropy of the Fourier distribution of $f$ and $Inf(f)$ is the total influence of $f$?
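The two sides of the conjectured inequality are easy to compute exactly for small functions; the following brute-force sketch (our own illustration, not from the paper) does so for $MAJ_3$:

```python
from itertools import product
from math import log2

def fourier(f, n):
    # hat{f}(S) = E_x[f(x) * prod_{i in S} x_i] over uniform x in {-1,1}^n;
    # the subset S is encoded as the integer whose set bits index its elements
    coeffs = {}
    for bits in range(2 ** n):
        S = [i for i in range(n) if bits >> i & 1]
        total = 0
        for x in product((-1, 1), repeat=n):
            chi = 1
            for i in S:
                chi *= x[i]
            total += f(x) * chi
        coeffs[bits] = total / 2 ** n
    return coeffs

def entropy_and_influence(f, n):
    coeffs = fourier(f, n)
    # Parseval: the weights hat{f}(S)^2 sum to 1, so they form a distribution
    H = -sum(c * c * log2(c * c) for c in coeffs.values() if c != 0)
    Inf = sum(bin(S).count("1") * c * c for S, c in coeffs.items())
    return H, Inf

maj3 = lambda x: 1 if sum(x) > 0 else -1
H, Inf = entropy_and_influence(maj3, 3)
print(H, Inf)  # 2.0 1.5 -- consistent with FEI for any C >= 4/3
```

For $MAJ_3$ the Fourier distribution is uniform on four sets, giving $\mathbb{H}(\hat{f}^2)=2$ and $Inf(f)=3/2$; the conjecture asks whether one constant $C$ makes the inequality hold for all $f$.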
In this paper we present three new contributions towards the FEI conjecture:
1) We first consider the weaker Fourier Min-entropy-Influence (FMEI) conjecture posed by O'Donnell and others [OWZ11,O'Donnell-book14], which asks if $\mathbb{H}_{\infty}(\hat{f}^2) \leq C\cdot Inf(f)$, where $\mathbb{H}_{\infty}(\hat{f}^2)$ is the min-entropy of the Fourier distribution. We show $\mathbb{H}_{\infty}(\hat{f}^2)\leq 2 {C}_{\min}^\oplus(f)$, where ${C}_{\min}^\oplus(f)$ is the minimum parity certificate complexity of $f$. We also show that for every $\varepsilon\geq 0$, we have $\mathbb{H}_{\infty}(\hat{f}^2)\leq 2\log (\|\widehat{f}\|_{1,\varepsilon}/(1-\varepsilon))$, where $\|\widehat{f}\|_{1,\varepsilon}$ is the approximate spectral norm of $f$. As a corollary, we verify the FMEI conjecture for the class of read-$k$ DNFs (for constant $k$). This improves upon a recent (independent) result of Shalev [Shalev18].
2) Our second contribution shows that $\mathbb{H}(\hat{f}^2)\leq 2 {aUC}^\oplus(f)$, where ${aUC}^\oplus(f)$ is the average unambiguous parity certificate complexity of $f$. This improves upon several bounds shown by Chakraborty et al. [CKLS16].
Resolving the FEI conjecture would imply the long-standing conjecture of Mansour [Mansour95]. We show that a question weaker than the FEI conjecture would already imply Mansour's conjecture: is $\mathbb{H}(\hat{f}^2)\leq C\cdot \min\{{C}^0(f),{C}^1(f)\}$, where ${C}^0(f)$ and ${C}^1(f)$ are the zero and one certificate complexities of $f$, respectively?
3) Our third contribution is a better understanding of an implication of the FEI conjecture concerning the structure of polynomials that $1/3$-approximate a Boolean function on the Boolean cube. We pose a conjecture: no flat polynomial (whose non-zero Fourier coefficients have the same magnitude) of degree $d$ and sparsity $2^{\omega(d)}$ can $1/3$-approximate a Boolean function. This conjecture is known to be true assuming FEI, and we prove it unconditionally (i.e., without assuming the FEI conjecture) for a class of polynomials. We finally discuss an intriguing connection between our conjecture and the constant for the Bohnenblust-Hille inequality, which has been extensively studied in functional analysis.

Sun, 07 Oct 2018 16:40:10 +0300 · https://eccc.weizmann.ac.il/report/2018/167

TR18-166 | An overview of some semantic and syntactic complexity classes |
Tayfun Pay,
James Cox
https://eccc.weizmann.ac.il/report/2018/166

We review some semantic and syntactic complexity classes that were introduced to better understand the relationship between complexity classes P and NP. We also define several new complexity classes, some of which are associated with Mersenne numbers, and show their location in the complexity hierarchy.

Mon, 24 Sep 2018 19:11:26 +0300 · https://eccc.weizmann.ac.il/report/2018/166

TR18-165 | Resolution and the binary encoding of combinatorial principles |
Stefan Dantchev,
Nicola Galesi,
Barnaby Martin
https://eccc.weizmann.ac.il/report/2018/165

We investigate the size complexity of proofs in $RES(s)$ -- an extension of Resolution working on $s$-DNFs instead of clauses -- for families of contradictions given in the {\em unusual binary} encoding. A motivation for our work is the study of size lower bounds for Resolution refutations of families of contradictions in the {\em usual unary} encoding. Our main interest is the $k$-Clique Principle, whose Resolution complexity is still unknown. The approach is justified by the observation that for a large class of combinatorial principles (those expressible as $\Pi_2$ first-order formulae) short $RES(\log n)$ refutations of the binary encoding are reducible to short Resolution refutations of the unary encoding.
Our main result is an $n^{\Omega(k)}$ lower bound for the size of refutations of the binary $k$-Clique Principle in $RES(\lfloor \frac{1}{2}\log \log n\rfloor)$. This improves the result of Lauria, Pudl\'ak et al. [24], who proved the lower bound for Resolution, that is, for $RES(1)$. A lower bound in $RES(\log n)$ for the binary $k$-Clique Principle would prove a lower bound in Resolution for its unary version. Resolution lower bounds for the (unary) $k$-Clique Principle are known only when refutations are either treelike [10] or read-once [4] (regular Resolution).
To contrast the proof complexity between the unary and binary encodings of combinatorial principles, we consider the binary (weak) Pigeonhole Principle $BinPHP^m_n$ for $m>n$. Our second lower bound proves that in $RES(s)$, for $s\leq \log^{\frac{1}{2-\epsilon}}(n)$, the shortest proofs of $BinPHP^m_n$ require size $2^{n^{1-\delta}}$, for any $\delta>0$.
By a result of Buss and Pitassi [15] we know that for the (unary, weak) Pigeonhole Principle $PHP^m_n$, exponential lower bounds (in the size of $PHP^m_n$) are not possible in Resolution when $m \geq 2^{\sqrt{n\log n}}$, since there is an upper bound of $2^{O(\sqrt{n\log n})}$. Our lower bound for $BinPHP^m_n$, together with the fact that short $RES(1)$ refutations of $PHP^m_n$ can be translated into short $RES(\log n)$ proofs of $BinPHP^m_n$, shows a form of tightness of the upper bound of [15]. Furthermore, we prove that $BinPHP^m_n$ can be refuted in size $2^{\Theta(n)}$ in treelike $RES(1)$, contrasting with the unary case, where $PHP^m_n$ requires treelike $RES(1)$ refutations of size $2^{\Omega(n \log n)}$ [9,16].
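To illustrate what the binary encoding looks like (our own sketch, not the paper's exact formulation; for simplicity it assumes the number of holes is a power of two, so no extra clauses are needed to exclude unused codes):

```python
from itertools import combinations
from math import ceil, log2

def bin_php_clauses(m, n):
    """Clauses of the binary pigeonhole principle: pigeon i's hole is encoded
    by b = ceil(log2(n)) bits, and for every pair of pigeons and every hole h
    a clause forbids both pigeons from taking h. A literal (i, j, s) asserts
    'bit j of pigeon i's code equals s'; each clause requires at least one of
    its literals to hold."""
    b = ceil(log2(n))
    clauses = []
    for h in range(n):
        hbits = [(h >> j) & 1 for j in range(b)]
        for i1, i2 in combinations(range(m), 2):
            # some bit of pigeon i1 or i2 must differ from h's encoding
            clause = [(i, j, 1 - hbits[j]) for i in (i1, i2) for j in range(b)]
            clauses.append(clause)
    return clauses

cnf = bin_php_clauses(3, 2)  # 3 pigeons, 2 holes: unsatisfiable
print(len(cnf))  # 2 holes x C(3,2) pairs = 6 clauses, each with 2 literals
```

Note the contrast with the unary encoding: here each pigeon contributes only $\lceil \log_2 n \rceil$ variables and totality is built in, which is exactly what changes the resolution complexity landscape studied in the paper.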
In order to compare the complexity of refuting binary encodings in Resolution with respect to their unary versions, we study under what conditions the complexity of refutations in Resolution does not increase significantly (by more than a polynomial factor) when shifting between the unary and the binary encoding. We show that this is true, from unary to binary, for propositional encodings of principles expressible as a $\Pi_2$-formula and involving total variable comparisons. We then show that this is true, from binary to unary, when one considers the {\em functional unary encoding}. In particular, we derive a polynomial upper bound in $RES(1)$ for the binary version $bRLOP$ of a variant of the Linear Ordering Principle, $RLOP$, which exponentially separates read-once Resolution from Resolution (see [2]). Finally, we prove that the binary encoding of the general Ordering Principle $OP$ -- with no total ordering constraints -- is polynomially provable in Resolution. These last results can be interpreted as showing that shifting to the binary encoding preserves the proof hardness of the corresponding unary encodings when working in Resolution.

Sun, 23 Sep 2018 08:41:47 +0300 · https://eccc.weizmann.ac.il/report/2018/165