ECCC - Reports
Latest Reports published at https://eccc.weizmann.ac.il
Mon, 13 Aug 2018 01:54:47 +0300

TR18-141 | Local Decodability of the Burrows-Wheeler Transform
Omri Weinstein,
Sandip Sinha
https://eccc.weizmann.ac.il/report/2018/141

The Burrows-Wheeler Transform (BWT) is among the most influential discoveries in text compression and DNA storage. It is a \emph{reversible} preprocessing step that rearranges an $n$-letter string into runs of identical characters (by exploiting context regularities), resulting in highly compressible strings, and is the basis for the ubiquitous \texttt{bzip} program. Alas, the decoding process of BWT is inherently sequential and requires $\Omega(n)$ time even to retrieve a \emph{single} character.
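As a concrete reference point (the classical textbook transform, not the paper's data structure), the forward BWT and its inherently sequential inversion via the LF-mapping can be sketched in a few lines of Python; the `$` sentinel and function names are illustrative conventions:

```python
def bwt(s: str) -> str:
    """Forward BWT: last column of the sorted rotations of s + sentinel '$'.
    Note the output groups identical characters into runs."""
    s = s + "$"  # sentinel assumed lexicographically smallest
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last: str) -> str:
    """Invert via the LF-mapping: a stable sort of the last column gives the
    first column, and decoding walks this chain one character at a time --
    recovering even a single character naively requires the whole walk."""
    n = len(last)
    lf = sorted(range(n), key=lambda i: last[i])  # stable sort = LF-mapping
    row = last.index("$")
    out = []
    for _ in range(n):
        row = lf[row]
        out.append(last[row])
    return "".join(out).rstrip("$")
```

For example, `bwt("banana")` produces `"annb$aa"`, whose runs (`nn`, `aa`) are what the downstream Move-To-Front stage exploits.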
We study the succinct data structure problem of locally decoding short substrings of a given text under its \emph{compressed} BWT, i.e., with small redundancy $r$ over the \emph{Move-To-Front} based (\texttt{bzip}) compression. The celebrated BWT-based FM-index (FOCS '00), and other related literature, gravitate toward a tradeoff of $r=\tilde{O}(n/\sqrt{t})$ bits, when a single character is to be decoded in $O(t)$ time. We give a near-quadratic improvement $r=\tilde{O}(n\cdot \lg t/t)$. As a by-product, we obtain an \emph{exponential} (in $t$) improvement on the redundancy of the FM-index for counting pattern-matches on compressed text. In the interesting regime where the text compresses to $n^{1-o(1)}$ bits, these results provide an $\exp(t)$ \emph{overall} space reduction. For the local decoding problem, we also prove an $\Omega(n/t^2)$ cell-probe lower bound for ``symmetric'' data structures.
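For readers unfamiliar with the Move-To-Front step underlying \texttt{bzip}-style compression, here is a minimal sketch (standard textbook MTF, not the paper's locally decodable variant). Note that decoding position $i$ naively requires replaying all earlier table updates, which is exactly the sequential bottleneck at issue:

```python
def mtf_encode(s: str, alphabet: str) -> list:
    """Encode each character by its current position in the table, then move
    it to the front; runs of identical characters become runs of zeros."""
    table = list(alphabet)
    out = []
    for ch in s:
        i = table.index(ch)
        out.append(i)
        table.pop(i)
        table.insert(0, ch)
    return out

def mtf_decode(codes: list, alphabet: str) -> str:
    """Decoding code i requires the table state after all previous moves --
    hence Omega(n) time for a single character without extra structure."""
    table = list(alphabet)
    out = []
    for i in codes:
        ch = table[i]
        out.append(ch)
        table.pop(i)
        table.insert(0, ch)
    return "".join(out)
```

On the BWT output `"annb$aa"` this yields `[1, 3, 0, 3, 3, 3, 0]`: character runs map to zeros, which is what makes the combined BWT+MTF stream highly compressible.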
We achieve our main result by designing a compressed Rank (partial-sums) data structure over BWT. The key component is a locally-decodable Move-to-Front (MTF) code: with only $O(1)$ extra bits per block of length $n^{\Omega(1)}$, the decoding time of a single character can be decreased from $\Omega(n)$ to $O(\lg n)$. This result is of independent interest in algorithmic information theory.

Mon, 13 Aug 2018 01:54:47 +0300

TR18-140 | A Lower Bound for Adaptively-Secure Collective Coin-Flipping Protocols
Yael Tauman Kalai,
Ilan Komargodski,
Ran Raz
https://eccc.weizmann.ac.il/report/2018/140

In 1985, Ben-Or and Linial (Advances in Computing Research '89) introduced the collective coin-flipping problem, where $n$ parties communicate via a single broadcast channel and wish to generate a common random bit in the presence of adaptive Byzantine corruptions. In this model, the adversary can decide to corrupt a party in the course of the protocol as a function of the messages seen so far. They showed that the majority protocol, in which each player sends a random bit and the output is the majority value, tolerates $O(\sqrt n)$ adaptive corruptions. They conjectured that this is optimal for such adversaries.
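The majority protocol is simple enough to simulate. The toy sketch below (not the paper's adversary model: the simulated adversary merely flips up to a fixed budget of 0-votes to 1) illustrates why a budget of order $\sqrt{n}$ corruptions suffices to bias the outcome, since the honest vote count deviates from $n/2$ by only $\Theta(\sqrt{n})$:

```python
import random

def majority_protocol(n: int, budget: int, seed: int = 0) -> float:
    """Fraction of runs in which an adversary that flips at most `budget`
    broadcast 0-bits to 1 forces the majority output to be 1."""
    rng = random.Random(seed)
    trials, forced = 2000, 0
    for _ in range(trials):
        bits = [rng.randint(0, 1) for _ in range(n)]
        left = budget
        for i in range(n):
            if bits[i] == 0 and left > 0:  # corrupt a 0-sender
                bits[i] = 1
                left -= 1
        if sum(bits) * 2 > n:  # majority is 1
            forced += 1
    return forced / trials
```

With $n = 441$ and budget $2\sqrt{n} = 42$ the output is forced to 1 almost always, since a shift of $42$ is several standard deviations ($\sqrt{n}/2 \approx 10.5$) of the honest sum; with budget $0$ the outcome stays near fair.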
We prove that the majority protocol is optimal (up to a poly-logarithmic factor) among all protocols in which each party sends a single, possibly long, message.
Previously, such a lower bound was known for protocols in which parties are allowed to send only a single bit (Lichtenstein, Linial, and Saks, Combinatorica '89), or for symmetric protocols (Goldwasser, Kalai, and Park, ICALP '15).

Sat, 11 Aug 2018 16:10:17 +0300

TR18-139 | Hardness Magnification for Natural Problems
Igor Carboni Oliveira,
Rahul Santhanam
https://eccc.weizmann.ac.il/report/2018/139

We show that for several natural problems of interest, complexity lower bounds that are barely non-trivial imply super-polynomial or even exponential lower bounds in strong computational models. We term this phenomenon "hardness magnification". Our examples of hardness magnification include:
1. Let MCSP$[s]$ be the decision problem whose YES instances are truth tables of functions with circuit complexity at most $s(n)$. We show that if MCSP$[2^{\sqrt{n}}]$ cannot be solved on average with zero error by formulas of linear (or even sub-linear) size, then NP does not have polynomial-size formulas. In contrast, Hirahara and Santhanam (2017) showed that MCSP$[2^{\sqrt{n}}]$ cannot be solved in the worst case by formulas of nearly quadratic size.
2. If there is a $c > 0$ such that for each positive integer $d$ there is an $\varepsilon > 0$ such that the problem of checking if an $n$-vertex graph in the adjacency matrix representation has a vertex cover of size $(\log n)^c$ cannot be solved by depth-$d$ AC$^0$ circuits of size $m^{1+\varepsilon}$, where $m = \Theta(n^2)$, then NP does not have polynomial-size formulas.
3. Let $(\alpha, \beta)$-MCSP$[s]$ be the promise problem whose YES instances are truth tables of functions that are $\alpha$-approximable by a circuit of size $s(n)$, and whose NO instances are truth tables of functions that are not $\beta$-approximable by a circuit of size $s(n)$. We show that for any non-trivial choice of the parameters $\alpha$ and $\beta$, if $(\alpha, \beta)$-MCSP$[2^{\sqrt{n}}]$ cannot be solved by randomized algorithms with random access to the input running in sublinear time, then NP is not contained in BPP.
4. If for each probabilistic quasi-linear time machine $M$ using polylogarithmically many random bits that is claimed to solve Satisfiability, there is a deterministic polynomial-time machine that on infinitely many input lengths $n$ identifies either a satisfiable instance of bitlength $n$ on which $M$ does not accept with high probability or an unsatisfiable instance of bitlength $n$ on which $M$ does not reject with high probability, then NEXP is not contained in BPP.
5. Given functions $s,c \colon \mathbb{N} \rightarrow \mathbb{N}$ where $s \geq c$, let MKtP$[c,s]$ be the promise problem whose YES instances are strings of Kt complexity (Levin, 1984) at most $c(N)$ and NO instances are strings of Kt complexity greater than $s(N)$. We show that if there is a $\delta > 0$ such that for each $\varepsilon > 0$, MKtP$[N^{\varepsilon}, N^{\varepsilon} + 5 \log(N)]$ requires Boolean circuits of size $N^{1+\delta}$, then EXP is not contained in SIZE(poly).
For each of the cases of magnification above, we observe that standard hardness assumptions imply much stronger lower bounds for these problems than we require for magnification.
We further explore magnification as an avenue to proving strong lower bounds, and argue that magnification circumvents the "natural proofs" barrier of Razborov and Rudich (1997). Examining some standard proof techniques, we find that they fall just short of proving lower bounds via magnification. As one of our main open problems, we ask whether there are other meta-mathematical barriers to proving lower bounds that rule out approaches combining magnification with known techniques.

Fri, 10 Aug 2018 22:21:11 +0300

TR18-138 | Non-black-box Worst-case to Average-case Reductions within NP
Shuichi Hirahara
https://eccc.weizmann.ac.il/report/2018/138

There are significant obstacles to establishing an equivalence between the worst-case and average-case hardness of NP: several results suggest that black-box worst-case to average-case reductions are not likely to be used for reducing any worst-case problem outside coNP to a distributional NP problem.
This paper overcomes the barrier. We present the first non-black-box worst-case to average-case reduction from a problem outside coNP (unless Random 3SAT is easy for coNP algorithms) to a distributional NP problem. Specifically, we consider the minimum time-bounded Kolmogorov complexity problem (MINKT), and prove that there exists a zero-error randomized polynomial-time algorithm approximating the minimum time-bounded Kolmogorov complexity $k$ within an additive error $\widetilde{O}(\sqrt{k})$ if its average-case version admits an errorless heuristic polynomial-time algorithm. (The converse direction also holds under a plausible derandomization assumption.) We also show that, given a truth table of size $2^n$, approximating the minimum circuit size within a factor of $2^{(1 - \epsilon) n}$ is in BPP for some constant $\epsilon > 0$ if and only if its average-case version is easy.
Based on our results, we propose a research program for excluding Heuristica, i.e., establishing an equivalence between the worst-case and average-case hardness of NP through the lens of MINKT or the Minimum Circuit Size Problem (MCSP).
Fri, 10 Aug 2018 13:07:32 +0300

TR18-137 | Quantum Lower Bound for Approximate Counting Via Laurent Polynomials
Scott Aaronson
https://eccc.weizmann.ac.il/report/2018/137

We consider the following problem: estimate the size of a nonempty set $S\subseteq\left[ N\right] $, given both quantum queries to a membership oracle for $S$, and a device that generates equal superpositions $\left\vert S\right\rangle $ over the elements of $S$. We show that, if $\left\vert S\right\vert $ is neither too large nor too small, then approximate counting with these resources is still quantumly hard. More precisely, any quantum algorithm needs either $\Omega\left( \sqrt{N/\left\vert S\right\vert }\right) $ queries or else $\Omega\left( \min\left\{ \left\vert S\right\vert ^{1/4},\sqrt{N/\left\vert S\right\vert }\right\} \right)$ copies of $\left\vert S\right\rangle $. This means that, in the black-box setting, quantum sampling does not imply approximate counting. The proof uses a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents.

Tue, 07 Aug 2018 18:13:21 +0300

TR18-136 | List Decoding with Double Samplers
Irit Dinur,
Tali Kaufman,
Inbal Livni Navon,
Amnon Ta-Shma,
Prahladh Harsha
https://eccc.weizmann.ac.il/report/2018/136

We develop the notion of double samplers, first introduced by Dinur and Kaufman [Proc. 58th FOCS, 2017], which are samplers with additional combinatorial properties, and whose existence we prove using high dimensional expanders.
We show how double samplers give a generic way of amplifying distance in a way that enables efficient list-decoding. There are many error correcting code constructions that achieve large distance by starting with a base code $C$ with moderate distance, and then amplifying the distance using a sampler, e.g., the ABNNR code construction [IEEE Trans. Inform. Theory, 38(2):509--516, 1992]. We show that if the sampler is part of a larger double sampler, then the construction has an efficient list-decoding algorithm that is oblivious to the base code $C$ (i.e., it runs the unique decoder for $C$ in a black-box way).
Our list-decoding algorithm works as follows: it uses a local voting scheme from which it constructs a unique games constraint graph. The constraint graph is an expander, so we can solve unique games efficiently. These solutions are the output of the list decoder. This is a novel use of a unique games algorithm as a subroutine in a decoding procedure, as opposed to the more common situation in which unique games are used for demonstrating hardness results.
Double samplers and high dimensional expanders are akin to pseudorandom objects in their utility, but they greatly exceed random objects in their combinatorial properties. We believe that these objects hold significant potential for coding theoretic constructions and view this work as demonstrating the power of double samplers in this context.

Wed, 01 Aug 2018 08:16:05 +0300

TR18-135 | Variants of Homomorphism Polynomials Complete for Algebraic Complexity Classes
Prasad Chaugule,
Nutan Limaye,
Aditya Varre
https://eccc.weizmann.ac.il/report/2018/135

We present polynomial families complete for the well-studied algebraic complexity classes VF, VBP, VP, and VNP. The polynomial families are based on the homomorphism polynomials studied in the recent works of Durand et al. (2014) and Mahajan et al. (2016). We consider three different variants of graph homomorphisms, namely injective homomorphisms, directed homomorphisms, and injective directed homomorphisms, and obtain polynomial families complete for VF, VBP, VP, and VNP under each of these variants. The polynomial families have the following properties:
* The polynomial families complete for VF, VBP, and VP are model independent, i.e., they do not use a particular instance of a formula, ABP, or circuit for characterising VF, VBP, or VP, respectively.
* All the polynomial families are hard under $p$-projections.

Tue, 31 Jul 2018 21:02:42 +0300

TR18-134 | Cosystolic Expanders over any Abelian Group
David Mass,
Tali Kaufman
https://eccc.weizmann.ac.il/report/2018/134

In this work we show a general reduction from high dimensional complexes to their links based on the spectral properties of the links. We use this reduction to show that if a certain property is testable in the links, then it is also testable in the complex. In particular, we show that expansion of the links of a complex over any abelian group implies that the complex is an expander \emph{over the same group}.
Previous works studied the expansion properties of complexes from their links, but their proofs were tailored for the field $\mathbb{F}_2$. They showed that the combination of spectral and topological properties of the links of a complex implies its expansion over the field $\mathbb{F}_2$. We show here a general reduction based \emph{only} on the spectral properties of the links.
We then show that all the links of Ramanujan complexes (which are called spherical buildings) are expanders over any abelian group. For this we generalize the result of~\cite{LMM16}, who showed that spherical buildings are coboundary expanders over $\mathbb{F}_2$. Combined with our reduction, this implies the existence of bounded degree cosystolic expanders over any abelian group.

Mon, 30 Jul 2018 15:29:58 +0300

TR18-133 | Constant-error pseudorandomness proofs from hardness require majority
Emanuele Viola
https://eccc.weizmann.ac.il/report/2018/133

Research in the 80's and 90's showed how to construct a pseudorandom generator from a function that is hard to compute on more than $99\%$ of the inputs. A more recent line of works showed, however, that if the generator has small error, then the proof of correctness cannot be implemented in subclasses of TC$^{0}$, and hence the construction cannot be applied to the known hardness results. This paper considers a typical class of pseudorandom generator constructions, and proves an analogous result for the case of large error.
Thu, 26 Jul 2018 20:10:00 +0300

TR18-132 | Near-optimal Bootstrapping of Hitting Sets for Algebraic Circuits
Mrinal Kumar,
Ramprasad Saptharishi,
Anamay Tengse
https://eccc.weizmann.ac.il/report/2018/132

The classical lemma of Ore-DeMillo-Lipton-Schwartz-Zippel states that any nonzero polynomial $f(x_1,\ldots, x_n)$ of degree at most $s$ will evaluate to a nonzero value at some point on a grid $S^n \subseteq \mathbb{F}^n$ with $|S| > s$. Thus, there is a deterministic polynomial identity test (PIT) for all degree-$s$ size-$s$ algebraic circuits in $n$ variables that runs in time $\mathrm{poly}(s) \cdot (s+1)^n$.
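The grid test implied by the lemma can be sketched directly. The code below is the trivial exponential-in-$n$ baseline that the bootstrapping results improve upon, evaluated here over the integers for simplicity and with the polynomial given as a black-box Python callable:

```python
from itertools import product

def grid_pit(poly, n: int, degree: int) -> bool:
    """Deterministic blackbox PIT via Ore-DeMillo-Lipton-Schwartz-Zippel:
    a nonzero n-variate polynomial of total degree <= `degree` is nonzero
    somewhere on the grid S^n with |S| = degree + 1 > degree.
    Returns True iff `poly` vanishes on the whole grid, i.e. is identically
    zero.  Costs (degree+1)^n evaluations -- the s^{O(n)} baseline."""
    S = range(degree + 1)
    return all(poly(*point) == 0 for point in product(S, repeat=n))
```

For instance, `grid_pit(lambda x, y: (x + y)**2 - x*x - 2*x*y - y*y, n=2, degree=2)` certifies an identically zero polynomial, while `x*y - 1` fails the test already at the grid point $(0,0)$.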
In a surprising recent result, Agrawal, Ghosh and Saxena (STOC 2018) showed that any deterministic blackbox PIT algorithm for degree-$s$, size-$s$, $n$-variate circuits with running time as bad as $s^{n^{0.5 - \delta}}\mathrm{Huge}(n)$, where $\delta > 0$ and $\mathrm{Huge}(n)$ is an arbitrary function, can be used to construct blackbox PIT algorithms for degree-$s$, size-$s$ circuits with running time $s^{\exp \circ \exp (O(\log ^\ast s))}$.
The authors asked if a similar conclusion followed if their hypothesis was weakened to having deterministic PIT with running time $s^{o(n)}\cdot \mathrm{Huge}(n)$. In this paper, we answer their question in the affirmative. We show that, given a deterministic blackbox PIT that runs in time $s^{o(n)}\cdot \mathrm{Huge}(n)$ for all degree-$s$ size-$s$ algebraic circuits over $n$ variables, we can obtain a deterministic blackbox PIT that runs in time $s^{\exp \circ \exp(O(\log^{*}s))}$ for all degree-$s$ size-$s$ algebraic circuits over $n$ variables. In other words, any blackbox PIT with just a slightly non-trivial exponent of $s$ compared to the trivial $s^{O(n)}$ test can be used to give a nearly polynomial time blackbox PIT algorithm.
Sun, 22 Jul 2018 16:05:53 +0300