ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR18-065 | 9th November 2018 20:14

Near-Optimal Erasure List-Decodable Codes





Revision #1
Authors: Avraham Ben-Aroya, Dean Doron, Amnon Ta-Shma
Accepted on: 9th November 2018 20:14
Downloads: 697


Abstract:

A code $\mathcal{C}$ is $(1-\tau,L)$ erasure list-decodable if for every codeword $w$, after erasing any $1-\tau$ fraction of the symbols of $w$,
the remaining $\tau$-fraction of its symbols has at most $L$ possible completions into codewords of $\mathcal{C}$.
Non-explicitly, there exist binary $(1-\tau,L)$ erasure list-decodable codes having rate $O(\tau)$ and tiny list-size $L=O(\log \frac{1}{\tau})$.
Achieving either of these parameters explicitly is a natural open problem and was brought up in several works (e.g., [GI02,Gur03,Gur04]). While partial progress on the problem has been achieved, no explicit construction prior to this work achieved rate better than $\Omega(\tau^2)$ or list-size smaller than $\Omega(1/\tau)$ (for $\tau$ small enough). Furthermore, Guruswami showed that no linear code can have list-size smaller than $\Omega(1/\tau)$ [Gur03]. In this work we construct an explicit binary $(1-\tau,L)$ erasure list-decodable code having rate $\tau^{1+\gamma}$ (for any constant $\gamma>0$ and $\tau$ small enough) and list-size $\mathrm{poly}(\log \frac{1}{\tau})$, simultaneously answering both questions and exhibiting an explicit non-linear code that provably beats the best possible linear one.
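
To make the erasure list-decodability definition concrete, here is a small brute-force sketch in Python; the function name and the toy code are illustrative only and do not appear in the paper. It computes the worst-case number of codewords consistent with any surviving $\tau$-fraction of coordinates.

    # Brute-force illustration of (1 - tau, L) erasure list-decodability on a
    # toy binary code. Real parameters are far beyond brute force; this only
    # makes the definition concrete.
    from itertools import combinations

    def erasure_list_size(code, tau):
        """Worst-case number of codewords consistent with some retained
        tau-fraction of coordinates (maximized over retained positions)."""
        n = len(code[0])
        keep = max(1, int(tau * n))          # number of surviving symbols
        worst = 0
        for positions in combinations(range(n), keep):
            buckets = {}                     # restriction -> matching codewords
            for w in code:
                key = tuple(w[i] for i in positions)
                buckets[key] = buckets.get(key, 0) + 1
            worst = max(worst, max(buckets.values()))
        return worst

    # Length-4 toy code; with tau = 0.5 the worst-case list size is 2, so this
    # code is (1 - 0.5, 2) erasure list-decodable.
    toy_code = ["0000", "1111", "0101", "1010"]
    print(erasure_list_size(toy_code, 0.5))  # prints 2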

The binary erasure list-decoding problem is equivalent to the construction of explicit, low-error, strong dispersers outputting one bit with minimal entropy-loss and seed-length. Specifically, such dispersers with error $\varepsilon$ have an unavoidable entropy-loss of $\log\log\frac{1}{\varepsilon}$ and seed-length at least $\log\frac{1}{\varepsilon}$. As with erasure list-decodable codes, no explicit construction achieved seed-length shorter than $2\log\frac{1}{\varepsilon}$ or entropy-loss smaller than $2\log\frac{1}{\varepsilon}$, which are the best possible parameters for extractors. For every constant $\gamma>0$ and every small $\varepsilon$, we explicitly construct an $\varepsilon$-error one-bit strong disperser with near-optimal seed-length $(1+\gamma)\log\frac{1}{\varepsilon}$ and near-optimal entropy-loss $O(\log\log\frac{1}{\varepsilon})$.
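
For reference, one standard formulation of the disperser notion used above (a sketch; the notation is generic and may differ slightly from the paper's): a function $\mathrm{Disp}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a strong $(k,\varepsilon)$-disperser if for every source $X$ over $\{0,1\}^n$ with min-entropy at least $k$, the support of $(U_d, \mathrm{Disp}(X,U_d))$ has size at least $(1-\varepsilon)2^{d+m}$, where $U_d$ is uniform over $\{0,1\}^d$ and independent of $X$. Its entropy-loss is $k+d-m$; the discussion above concerns the one-bit case $m=1$.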

The main ingredient in our construction is a new (and almost-optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length $n$ and tiny min-entropy $O(\log\log n)$ and the other source has length $O(\log n)$ and arbitrarily small constant min-entropy rate. When instantiated as a balanced two-source extractor, it improves upon Raz’s extractor [Raz05] in the constant error regime. The construction combines recent components and ideas from extractor theory with a delicate and novel analysis needed to resolve the dependency and error issues that prevented previous works (such as [Li15, CZ16, Coh16]) from achieving the above results.
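
Rephrased in standard notation (an informal restatement of the parameters above, not a verbatim statement from the paper): the object is a function $\mathrm{2Ext}\colon \{0,1\}^n \times \{0,1\}^{d} \to \{0,1\}$ with $d=O(\log n)$ such that whenever $X$ and $Y$ are independent, $X$ has min-entropy at least $c\log\log n$ for a suitable constant $c$, and $Y$ has min-entropy at least $\delta d$ for an arbitrarily small constant $\delta>0$, the output bit $\mathrm{2Ext}(X,Y)$ is $\varepsilon$-close to uniform for some constant $\varepsilon$.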



Changes to previous version:

Revisions in the introduction section.


Paper:

TR18-065 | 8th April 2018 17:49

Near-Optimal Strong Dispersers, Erasure List-Decodable Codes and Friends


Abstract:

A code $\mathcal{C}$ is $(1-\tau,L)$ erasure list-decodable if for every codeword $w$, after erasing any $1-\tau$ fraction of the symbols of $w$,
the remaining $\tau$-fraction of its symbols has at most $L$ possible completions into codewords of $\mathcal{C}$.
Non-explicitly, there exist binary $(1-\tau,L)$ erasure list-decodable codes having rate $O(\tau)$ and tiny list-size $L=O(\log \frac{1}{\tau})$.
Achieving either of these parameters explicitly is a natural open problem and was brought up in several works (e.g., [GI02,Gur03,Gur04]). While partial progress on the problem has been achieved, no explicit construction prior to this work achieved rate better than $\Omega(\tau^2)$ or list-size smaller than $\Omega(1/\tau)$ (for $\tau$ small enough). Furthermore, Guruswami showed that no linear code can have list-size smaller than $\Omega(1/\tau)$ [Gur03]. In this work we construct an explicit binary $(1-\tau,L)$ erasure list-decodable code having rate $\tau^{1+\gamma}$ (for any constant $\gamma>0$ and $\tau$ small enough) and list-size $\mathrm{poly}(\log \frac{1}{\tau})$, simultaneously answering both questions and exhibiting an explicit non-linear code that provably beats the best possible linear one.

The binary erasure list-decoding problem is equivalent to the construction of explicit, low-error, strong dispersers outputting one bit with minimal entropy-loss and seed-length. Specifically, such dispersers with error $\varepsilon$ have an unavoidable entropy-loss of $\log\log\frac{1}{\varepsilon}$ and seed-length at least $\log\frac{1}{\varepsilon}$. As with erasure list-decodable codes, no explicit construction achieved seed-length shorter than $2\log\frac{1}{\varepsilon}$ or entropy-loss smaller than $2\log\frac{1}{\varepsilon}$, which are the best possible parameters for extractors. For every constant $\gamma>0$ and every small $\varepsilon$, we explicitly construct an $\varepsilon$-error one-bit strong disperser with near-optimal seed-length $(1+\gamma)\log\frac{1}{\varepsilon}$ and near-optimal entropy-loss $O(\log\log\frac{1}{\varepsilon})$.

The main ingredient in our construction is a new (and almost-optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length $n$ and tiny min-entropy $O(\log\log n)$ and the other source has length $O(\log n)$ and arbitrarily small constant min-entropy rate. The construction combines recent components and ideas from extractor theory with a delicate and novel analysis needed to resolve dependency and error issues.



ISSN 1433-8092