ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





TR18-065 | 8th April 2018 17:49

Near-Optimal Strong Dispersers, Erasure List-Decodable Codes and Friends


Abstract:

A code $\mathcal{C}$ is $(1-\tau,L)$ erasure list-decodable if for every codeword $w$, after erasing any $1-\tau$ fraction of the symbols of $w$,
the remaining $\tau$-fraction of its symbols have at most $L$ possible completions into codewords of $\mathcal{C}$.
Non-explicitly, there exist binary $(1-\tau,L)$ erasure list-decodable codes having rate $O(\tau)$ and tiny list-size $L=O(\log \frac{1}{\tau})$.
Achieving either of these parameters explicitly is a natural open problem that was raised in several works (e.g., [GI02,Gur03,Gur04]). While partial progress has been made, no explicit construction prior to this work achieved rate better than $\Omega(\tau^2)$ or list-size smaller than $\Omega(1/\tau)$ (for $\tau$ small enough). Furthermore, Guruswami showed that no linear code can have list-size smaller than $\Omega(1/\tau)$ [Gur03]. In this work we construct an explicit binary $(1-\tau,L)$ erasure list-decodable code having rate $\tau^{1+\gamma}$ (for any constant $\gamma>0$ and $\tau$ small enough) and list-size $\mathrm{poly}(\log \frac{1}{\tau})$, simultaneously answering both questions and exhibiting an explicit non-linear code that provably beats the best possible linear one.
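The definition above can be checked directly on toy codes. The sketch below is a brute-force verifier of $(1-\tau,L)$ erasure list-decodability (exponential time in the block length, so for illustration only); the function names are ours, not from the paper.

```python
# Brute-force check of the (1 - tau, L) erasure list-decodability
# definition: after erasing any (1 - tau) fraction of positions,
# every codeword's surviving symbols have at most L completions
# into codewords. Illustrative only; feasible for tiny codes.
from itertools import combinations
from math import ceil

def list_size(code, kept):
    """Largest number of codewords agreeing with some codeword
    on the kept positions."""
    worst = 0
    for w in code:
        pattern = tuple(w[i] for i in kept)
        matches = sum(1 for c in code
                      if tuple(c[i] for i in kept) == pattern)
        worst = max(worst, matches)
    return worst

def is_erasure_list_decodable(code, tau, L):
    """True iff the code is (1 - tau, L) erasure list-decodable.
    It suffices to test kept sets of the minimum size ceil(tau * n),
    since keeping fewer symbols can only allow more completions."""
    n = len(code[0])
    k = ceil(tau * n)
    return all(list_size(code, kept) <= L
               for kept in combinations(range(n), k))
```

For instance, the length-3 repetition code $\{000,111\}$ is $(2/3,1)$ erasure list-decodable (any single surviving bit determines the codeword), whereas $\{000,011\}$ is not: erasing the last two positions leaves the symbol $0$, which has two completions.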

The binary erasure list-decoding problem is equivalent to the construction of explicit, low-error, strong dispersers outputting one bit with minimal entropy-loss and seed-length. Specifically, such dispersers with error $\varepsilon$ have an unavoidable entropy-loss of at least $\log\log\frac{1}{\varepsilon}$ and seed-length of at least $\log\frac{1}{\varepsilon}$. Similarly to the situation with erasure list-decodable codes, no explicit construction achieved seed-length better than $2\log\frac{1}{\varepsilon}$ or entropy-loss smaller than $2\log\frac{1}{\varepsilon}$, which are the best possible parameters for extractors. For every constant $\gamma>0$ and every small enough $\varepsilon$, we explicitly construct an $\varepsilon$-error one-bit strong disperser with near-optimal seed-length $(1+\gamma)\log\frac{1}{\varepsilon}$ and near-optimal entropy-loss $O(\log\log\frac{1}{\varepsilon})$.

The main ingredient in our construction is a new (and almost-optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length $n$ and tiny min-entropy $O(\log\log n)$, and the other source has length $O(\log n)$ and arbitrarily small constant min-entropy rate. The construction combines recent components and ideas from extractor theory with a delicate and novel analysis needed to resolve dependency and error issues.



ISSN 1433-8092 | Imprint