We construct a family of binary codes of relative distance $\frac{1}{2}-\varepsilon$ and rate $\varepsilon^{2} \cdot 2^{-\log^{\alpha}(1/\varepsilon)}$ for $\alpha \approx \frac{1}{2}$ that are decodable probabilistically in near-linear time. This improves upon the rate of the state-of-the-art near-linear time decoding near the GV bound due to Jeronimo, Srivastava, and Tulsiani, who gave a randomized decoding of Ta-Shma codes with $\alpha \approx \frac{5}{6}$ [T17, JST21].
Each code in our family can be constructed in probabilistic polynomial time, or deterministic polynomial time given sufficiently good explicit $3$-uniform hypergraphs.
Our construction is based on a new graph-based bias amplification method. While previous works start with some base code of relative distance $\frac{1}{2}-\varepsilon_0$ for $\varepsilon_0 \gg \varepsilon$ and amplify the distance to $\frac{1}{2}-\varepsilon$ by walking on an expander, or on a carefully tailored product of expanders, we walk over very sparse, highly mixing hypergraphs.
The study of such hypergraphs further offers an avenue toward achieving rate $\widetilde{\Omega}(\varepsilon^2)$.
For our unique- and list-decoding algorithms, we employ the framework developed in [JST21].
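For illustration only, the following minimal Python sketch (not taken from the paper) shows the generic direct-sum bias-amplification step underlying such constructions: each output bit is the parity of the base codeword along one walk. The helper `random_walks` uses uniformly random index tuples purely as a placeholder; the constructions discussed above instead use walks on explicit expanders, products of expanders, or sparse, highly mixing hypergraphs so that the number of walks, and hence the rate loss, stays small.

```python
import random


def direct_sum_encode(codeword, walks):
    """Amplify bias by XOR-ing the codeword bits along each walk.

    `codeword` is a list of 0/1 bits from a base code of modest bias;
    `walks` is a list of index tuples (e.g., walks on an expander or
    hyperedges of a sparse hypergraph). Each output bit is the parity
    of the codeword restricted to one walk.
    """
    out = []
    for walk in walks:
        parity = 0
        for i in walk:
            parity ^= codeword[i]
        out.append(parity)
    return out


def random_walks(n, length, count, rng=random):
    """Uniformly random index tuples as a stand-in for structured walks.

    The actual constructions replace this with walks on explicit,
    highly mixing objects to keep `count` (the output length) small.
    """
    return [tuple(rng.randrange(n) for _ in range(length)) for _ in range(count)]


if __name__ == "__main__":
    n = 64
    base = [random.randint(0, 1) for _ in range(n)]  # stand-in for a base codeword
    walks = random_walks(n, length=5, count=256)
    amplified = direct_sum_encode(base, walks)
    # Empirical bias: |#zeros - #ones| / length of the amplified word.
    bias = abs(sum(1 - 2 * b for b in amplified)) / len(amplified)
    print(f"empirical bias of amplified word: {bias:.3f}")
```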