ECCC - Reports
Latest Reports published at https://eccc.weizmann.ac.il

TR17-111 | A Lower Bound for Nonadaptive, One-Sided Error Testing of Unateness of Boolean Functions over the Hypercube |
Roksana Baleshzar,
Deeparnab Chakrabarty,
Ramesh Krishnan S. Pallavoor,
Sofya Raskhodnikova,
C. Seshadhri
https://eccc.weizmann.ac.il/report/2017/111

A Boolean function $f:\{0,1\}^d \to \{0,1\}$ is unate if, along each coordinate, the function is either nondecreasing or nonincreasing. In this note, we prove that any nonadaptive, one-sided error unateness tester must make $\Omega(\frac{d}{\log d})$ queries. This result improves upon the $\Omega(\frac{d}{\log^2 d})$ lower bound for the same class of testers due to Chen et al. (STOC, 2017).

Thu, 22 Jun 2017 17:35:29 +0300

TR17-110 | On Axis-Parallel Tests for Tensor Product Codes |
Alessandro Chiesa,
Peter Manohar,
Igor Shinkar
https://eccc.weizmann.ac.il/report/2017/110

Many low-degree tests examine the input function via its restrictions to random hyperplanes of a certain dimension. Examples include the line-vs-line (Arora, Sudan 2003), plane-vs-plane (Raz, Safra 1997), and cube-vs-cube (Bhangale, Dinur, Livni 2017) tests.
In this paper we study a test introduced by Ben-Sasson and Sudan in 2006 that only considers restrictions along axis-parallel hyperplanes. While such a test is necessarily "weaker", it works for a more general class of codes, namely tensor product codes. Moreover, axis-parallel tests play a key role in constructing LTCs with inverse polylogarithmic rate and short PCPs (Ben-Sasson, Sudan 2008; Meir 2010). We present two results on axis-parallel tests.
(1) Bivariate low-degree testing with low-agreement.
We prove an analogue of the Bivariate Low-Degree Testing Theorem of Polishchuk and Spielman (1994) in the low-agreement regime, albeit with much larger field size. Namely, for the 2-wise tensor product of the Reed--Solomon code, we prove that for sufficiently large fields, the 2-query variant of the axis-parallel line test (row-vs-column test) works for arbitrarily small agreement. Prior analyses of axis-parallel tests assumed high agreement, and no results for such tests in the low-agreement regime were known.
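To make the object of study concrete, the following is a toy simulation of the 2-query row-vs-column test on the 2-wise tensor of a Reed--Solomon code. The field size, degree bound, and brute-force degree check are our illustrative choices, not the parameters analyzed in the paper:

```python
# Toy row-vs-column (axis-parallel line) test for a 2-wise Reed--Solomon tensor.
import random

P = 101          # prime field size (illustrative; the paper needs much larger fields)
D = 3            # degree bound of the Reed--Solomon code

def lagrange_eval(pts, x):
    """Evaluate at x the unique degree <= len(pts)-1 polynomial through pts (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def is_low_degree(values):
    """Check that (0, v_0), (1, v_1), ... lies on a degree <= D polynomial."""
    pts = list(enumerate(values[:D + 1]))
    return all(lagrange_eval(pts, x) == v for x, v in enumerate(values))

def row_vs_column_test(table, n):
    """Read one random row and one random column; accept iff both restrictions
    are low-degree and they agree at the crossing point."""
    i, j = random.randrange(n), random.randrange(n)
    row = [table[i][k] for k in range(n)]
    col = [table[k][j] for k in range(n)]
    return is_low_degree(row) and is_low_degree(col) and row[j] == col[i]

# An honest codeword: evaluations of a bivariate polynomial of individual degree <= D.
n = 10
q = lambda x, y: (2 + 3 * x + 5 * y + 7 * x * y + x**3 * y**2) % P
table = [[q(x, y) for y in range(n)] for x in range(n)]
assert all(row_vs_column_test(table, n) for _ in range(50))
```

On an honest codeword the test always accepts; the content of the theorem is what the test guarantees on tables that are far from every codeword, even at arbitrarily small agreement.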
Our proof technique deviates significantly from that of Polishchuk and Spielman, which relies on algebraic methods such as Bézout's Theorem, and instead leverages a fundamental result in extremal graph theory by Kövári, Sós, and Turán. To our knowledge, this is the first time this result is used in the context of low-degree testing.
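For context, one standard form of the Kővári--Sós--Turán theorem (quoted here for reference; it is not restated in the abstract) bounds the number of edges in a graph with no $K_{s,t}$ subgraph:

```latex
% Kővári--Sós--Turán (1954), for s <= t: an n-vertex K_{s,t}-free graph satisfies
\[
  \mathrm{ex}(n, K_{s,t}) \;\le\;
  \tfrac{1}{2}\,(t-1)^{1/s}\,(n-s+1)\,n^{1-1/s} + \tfrac{1}{2}(s-1)\,n
  \;=\; O\!\left(n^{2-1/s}\right).
\]
```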
(2) Improved robustness for tensor product codes.
Robustness is a strengthening of local testability that underlies many applications. We prove that the axis-parallel hyperplane test for the $m$-wise tensor product of a linear code with block length $n$ and distance $d$ is $\Omega(d^m/n^m)$-robust. This improves on a theorem of Viderman (2012) by a factor of $1/\mathrm{poly}(m)$. While the improvement is not large, we believe that our proof is a notable simplification compared to prior work.

Thu, 22 Jun 2017 17:29:09 +0300

TR17-109 | Does Looking Inside a Circuit Help? |
Valentine Kabanets,
Russell Impagliazzo,
Antonina Kolokolova,
Pierre McKenzie,
Shadab Romani
https://eccc.weizmann.ac.il/report/2017/109

The Black-Box Hypothesis, introduced by Barak et al. (JACM, 2012), states that any property of Boolean functions decided efficiently (e.g., in BPP) with inputs represented by circuits can also be decided efficiently in the black-box setting, where an algorithm is given oracle access to the input function and an upper bound on its circuit size. If this hypothesis is true, then P$\neq$NP. We focus on the consequences of the hypothesis being false, showing that (under general conditions on the structure of a counterexample) it implies a non-trivial algorithm for Circuit-SAT. More specifically, we show that if there is a property $F$ of Boolean functions such that $F$ has high sensitivity on some input function $f$ of subexponential circuit complexity (a sufficient condition for $F$ to be a counterexample to the Black-Box Hypothesis), then Circuit-SAT is solvable by a subexponential-size circuit family. Moreover, if such a counterexample $F$ is symmetric, then Circuit-SAT is in P/poly. These results provide some evidence towards the conjecture (made in this paper) that the Black-Box Hypothesis is false if and only if Circuit-SAT is easy.

Thu, 22 Jun 2017 03:24:33 +0300

TR17-108 | Delegating Computation: Interactive Proofs for Muggles |
Shafi Goldwasser,
Yael Tauman Kalai,
Guy Rothblum
https://eccc.weizmann.ac.il/report/2017/108

In this work we study interactive proofs for tractable languages. The (honest) prover should be efficient and run in polynomial time; in other words, it should be a ``muggle'' (Muggle: ``In the fiction of J.K. Rowling: a person who possesses no magical powers''; from the Oxford English Dictionary). The verifier should be super-efficient and run in nearly-linear time. These proof systems can be used for delegating computation: a server can run a computation for a client and interactively prove the correctness of the result. The client can verify the result's correctness in nearly-linear time (instead of running the entire computation itself).
Previously, related questions were considered in the Holographic Proof setting by Babai, Fortnow, Levin and Szegedy, in the argument setting under computational assumptions by Kilian, and in the random oracle model by Micali. Our focus, however, is on the original interactive proof model where no assumptions are made on the computational power or adaptiveness of dishonest provers.
Our main technical theorem gives a public-coin interactive proof for any language computable by a log-space uniform Boolean circuit with depth $d$ and input length $n$. The verifier runs in time $n \cdot \mathrm{poly}(d, \log n)$ and space $O(\log n)$, the communication complexity is $\mathrm{poly}(d, \log n)$, and the prover runs in time $\mathrm{poly}(n)$. In particular, for languages computable by log-space uniform NC (circuits of $\mathrm{polylog}(n)$ depth), the prover is efficient, the verifier runs in time $n \cdot \mathrm{polylog}(n)$ and space $O(\log n)$, and the communication complexity is $\mathrm{polylog}(n)$.
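Public-coin protocols of this kind are built around the sumcheck protocol. The following is a minimal toy sketch of sumcheck for a multilinear polynomial, with the honest prover simulated inline; it illustrates the style of the verifier's round checks, not the full protocol of this paper (the modulus and example polynomial are our choices):

```python
# Toy sumcheck: the prover convinces the verifier that the sum of g over {0,1}^n
# equals a claimed value, with the verifier doing O(n) work plus one evaluation of g.
import random

P = 2**31 - 1   # prime modulus (illustrative choice)

def sumcheck(g, n, claimed):
    """g maps a tuple of n field elements to a field element, degree <= 1 per variable."""
    fixed = []                # random challenges chosen so far
    expected = claimed
    for _ in range(n):
        free = n - len(fixed) - 1
        # Honest prover: partial sums with the current variable set to 0 and to 1.
        def partial(b):
            total = 0
            for m in range(2 ** free):
                rest = [(m >> k) & 1 for k in range(free)]
                total = (total + g(tuple(fixed + [b] + rest))) % P
            return total
        g0, g1 = partial(0), partial(1)
        if (g0 + g1) % P != expected:               # round consistency check
            return False
        r = random.randrange(P)                     # verifier's public coin
        expected = (g0 + r * ((g1 - g0) % P)) % P   # degree-1 interpolation at r
        fixed.append(r)
    # Final check: a single oracle evaluation of g at the random point.
    return g(tuple(fixed)) % P == expected

# Example: a multilinear polynomial in 3 variables.
g = lambda x: (1 + 2 * x[0] + 3 * x[1] * x[2] + 5 * x[0] * x[1] * x[2]) % P
true_sum = sum(g((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P
assert sumcheck(g, 3, true_sum)
assert not sumcheck(g, 3, (true_sum + 1) % P)
```

Note that this toy prover runs in exponential time; the point of the theorem above is a protocol whose honest prover is polynomial-time and whose verifier is nearly linear-time.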
Using this theorem we make progress on several questions:
* We show how to construct 1-round computationally sound arguments with polylog communication for any log-space uniform NC computation. The verifier runs in quasi-linear time. This result uses a recent transformation of Kalai and Raz from public-coin interactive proofs to one-round arguments. The soundness of the argument system is based on the existence of a PIR scheme with polylog communication.
* Interactive proofs with public-coin, log-space, poly-time verifiers for all of P. This settles an open question regarding the expressive power of proof systems with such verifiers.
* Zero-knowledge interactive proofs with communication complexity that is quasi-linear in the witness length for any NP language verifiable in NC, based on the existence of one-way functions.
* Probabilistically checkable arguments (a model due to Kalai and Raz) of size polynomial in the witness length (rather than the instance length) for any NP language verifiable in NC, under computational assumptions.

Mon, 19 Jun 2017 22:09:19 +0300

TR17-107 | A Composition Theorem for Randomized Query Complexity |
Swagato Sanyal,
Dmitry Gavinsky,
Rahul Jain,
Srijita Kundu,
Troy Lee,
Anurag Anshu,
Priyanka Mukhopadhyay,
Miklos Santha
https://eccc.weizmann.ac.il/report/2017/107

Let the randomized query complexity of a relation for error probability $\epsilon$ be denoted by $\R_\epsilon(\cdot)$. We prove that for any relation $f \subseteq \{0,1\}^n \times \mathcal{R}$ and Boolean function $g:\{0,1\}^m \rightarrow \{0,1\}$, $\R_{1/3}(f\circ g^n) = \Omega(\R_{4/9}(f)\cdot\R_{1/2-1/n^4}(g))$, where $f \circ g^n$ is the relation obtained by composing $f$ and $g$. We also show that $\R_{1/3}\left(f \circ \left(g^\oplus_{O(\log n)}\right)^n\right)=\Omega(\log n \cdot \R_{4/9}(f) \cdot \R_{1/3}(g))$, where $g^\oplus_t$ is the function obtained by composing the XOR function on $t$ bits with $g^t$ ($t$ independent copies of $g$), for $t = O(\log n)$.

Sun, 18 Jun 2017 10:52:03 +0300

TR17-106 | Representations of Monotone Boolean Functions by Linear Programs |
Mateus de Oliveira Oliveira,
Pavel Pudlak
https://eccc.weizmann.ac.il/report/2017/106

We introduce the notion of monotone linear programming circuits (MLP circuits), a model of computation for partial Boolean functions. Using this model, we prove the following results:
1. MLP circuits are superpolynomially stronger than monotone Boolean circuits.
2. MLP circuits are exponentially stronger than monotone span programs.
3. MLP circuits can be used to provide monotone feasibility interpolation theorems for Lovász--Schrijver proof systems, and for mixed Lovász--Schrijver proof systems.
4. The Lovász--Schrijver proof system cannot be polynomially simulated by the cutting planes proof system. This is the first result showing a separation between these two proof systems.
Finally, we discuss connections between the problem of proving lower bounds on the size of MLP circuits and the problem of proving lower bounds on extended formulations of polytopes.

Fri, 16 Jun 2017 15:30:39 +0300

TR17-105 | Pseudo-Deterministic Proofs |
Dhiraj Holden,
Shafi Goldwasser,
Ofer Grossman
https://eccc.weizmann.ac.il/report/2017/105

We introduce pseudo-deterministic interactive proofs (psdAM): interactive proof systems for search problems in which the verifier is guaranteed, with high probability, to output the same output on different executions. As in the case of classical interactive proofs, the verifier is a probabilistic polynomial-time algorithm interacting with an untrusted powerful prover.

We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial-time algorithms: the goal of the latter is to find canonical solutions to search problems, whereas the goal of the former is to prove to a probabilistic polynomial-time verifier that a solution to a search problem is canonical. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial-time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one.
We show the following results:
A natural and illustrative example of a search problem in psdAM is graph isomorphism: given two isomorphic graphs $(G_0,G_1)$, the goal is to find an isomorphism $\phi$ from $G_0$ to $G_1$. We show a constant-round interactive proof in which, on every pair of input graphs $(G_0,G_1)$, the verifier with high probability outputs a unique isomorphism $\phi$ from $G_0$ to $G_1$, although many isomorphisms may exist.
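To illustrate what "canonical" means here (and only that; the protocol in the paper achieves this with an efficient verifier, whereas this sketch is exponential-time brute force over permutations):

```python
# Brute-force illustration of a canonical solution for graph isomorphism search:
# among all isomorphisms from G0 to G1, output the lexicographically smallest one.
from itertools import permutations

def canonical_isomorphism(g0, g1, n):
    """g0, g1: edge lists over vertices 0..n-1. Return the lex-smallest
    isomorphism as a tuple phi (phi[u] = image of u), or None if none exists."""
    e0 = {frozenset(e) for e in g0}
    e1 = {frozenset(e) for e in g1}
    for phi in permutations(range(n)):          # generated in lexicographic order
        if {frozenset({phi[u], phi[v]}) for u, v in e0} == e1:
            return phi                          # first hit is the canonical one
    return None

# A 4-cycle and a relabeled 4-cycle: many isomorphisms exist,
# but the output is always the same one.
g0 = [(0, 1), (1, 2), (2, 3), (3, 0)]
g1 = [(0, 2), (2, 1), (1, 3), (3, 0)]
phi = canonical_isomorphism(g0, g1, 4)
assert phi is not None
assert {frozenset({phi[u], phi[v]}) for u, v in g0} == {frozenset(e) for e in g1}
```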
In contrast, we show that it is unlikely that constant-round psdAM proofs exist for NP-complete problems: if any NP-complete problem has a psdAM protocol in which the verifier outputs a unique witness with high probability, then the polynomial hierarchy collapses.
We show that for every problem in search-BPP, there exists a pseudo-deterministic MA protocol which succeeds on infinitely many input lengths, where the verifier takes subexponential time.
Finally, we consider non-deterministic log-space (NL) algorithms with canonical outputs, which we name pseudo-deterministic NL: on every input, for every non-deterministic choice of the algorithm, either the algorithm rejects or it outputs a canonical unique output. We show that every search problem in search-NL (solvable by a nondeterministic log-space algorithm) is in pseudo-deterministic NL.
We show that the class of pseudo-deterministic AM protocols equals the class of problems solvable by polynomial time search algorithms with oracle access to promise-$AM \cap coAM$, where queries to the oracle must be in the promise. We show similar results for pseudo-deterministic NP and pseudo-deterministic MA.
Thu, 15 Jun 2017 00:16:03 +0300

TR17-104 | Local List Recovery of High-rate Tensor Codes & Applications |
Noga Ron-Zewi,
Brett Hemenway,
Mary Wootters
https://eccc.weizmann.ac.il/report/2017/104

In this work, we give the first construction of {\em high-rate} locally list-recoverable codes. List-recovery has been an extremely useful building block in coding theory, and our motivation is to use these codes as such a building block.
In particular, our construction gives the first {\em capacity-achieving} locally list-decodable codes (over a constant-sized alphabet); the first {\em capacity-achieving} globally list-decodable codes with a nearly-linear-time list-decoding algorithm (again over a constant-sized alphabet); and a randomized construction of binary codes on the Gilbert--Varshamov bound that can be uniquely decoded in near-linear time, with higher rate than was previously known.
Our techniques are actually quite simple, and are inspired by an approach of Gopalan, Guruswami, and Raghavendra (SIAM Journal on Computing, 2011) for list-decoding tensor codes. We show that tensor powers of (globally) list-recoverable codes are `approximately' locally list-recoverable, and that the `approximately' modifier may be removed by pre-encoding the message with a suitable locally decodable code. Instantiating this with known constructions of high-rate globally list-recoverable codes and high-rate locally decodable codes finishes the construction.

Wed, 14 Jun 2017 11:44:09 +0300

TR17-103 | Parameterized Property Testing of Functions |
Ramesh Krishnan S. Pallavoor,
Sofya Raskhodnikova,
Nithin Varma
https://eccc.weizmann.ac.il/report/2017/103

We investigate the parameters in terms of which the complexity of sublinear-time algorithms should be expressed. Our goal is to find input parameters that are tailored to the combinatorics of the specific problem being studied and to design algorithms that run faster when these parameters are small. This direction enables us to surpass the (worst-case) lower bounds, expressed in terms of the input size, for several problems. Our aim is to develop a level of understanding of the complexity of sublinear-time algorithms similar to the one that research in parameterized complexity has enabled for classical algorithms.
Specifically, we focus on testing properties of functions. By parameterizing the query complexity in terms of the size $r$ of the image of the input function, we obtain testers for monotonicity and convexity of functions of the form $f:[n]\to \mathbb{R}$ with query complexity $O(\log r)$, with no dependence on $n$. The result for monotonicity circumvents the $\Omega(\log n)$ lower bound by Fischer (Inf. Comput., 2004) for this problem. We present several other parameterized testers, providing compelling evidence that expressing the query complexity of property testers in terms of the input size is not always the best choice.

Mon, 12 Jun 2017 16:43:15 +0300

TR17-102 | Overview of the doubly-efficient interactive proof systems of RRR |
Oded Goldreich
https://eccc.weizmann.ac.il/report/2017/102

We provide an overview of the doubly-efficient interactive proof systems of Reingold, Rothblum, and Rothblum (STOC, 2016).
Recall that, by their result, any set that is decidable in polynomial time by an algorithm of space complexity $s(n)\leq n^{0.499}$ has a constant-round interactive proof system in which the prover runs in polynomial time and the verifier runs in time ${\widetilde O}(n)$.

Fri, 09 Jun 2017 14:43:25 +0300