We prove that a random linear code over $\mathbb{F}_q$, with probability arbitrarily close to $1$, is list decodable at radius $1-1/q-\epsilon$ with list size $L=O(1/\epsilon^2)$ and rate $R=\Omega_q(\epsilon^2/(\log^3(1/\epsilon)))$. Up to the polylogarithmic factor in $1/\epsilon$ and constant factors depending on $q$, this matches the lower bound $L=\Omega_q(1/\epsilon^2)$ on the list size and the upper bound $R=O_q(\epsilon^2)$ on the rate. Previously, only the existence (and not the abundance) of such codes was known, and only for the special case $q=2$ (Guruswami, Håstad, Sudan and Zuckerman, 2002).
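For completeness, we recall the standard definition implicit above: a code $\mathcal{C} \subseteq \mathbb{F}_q^n$ is list decodable at radius $\rho$ with list size $L$ if every Hamming ball of radius $\rho n$ contains at most $L$ codewords, i.e.,
\[
\max_{y \in \mathbb{F}_q^n} \big|\{ c \in \mathcal{C} : \Delta(c, y) \le \rho n \}\big| \le L,
\]
where $\Delta(\cdot,\cdot)$ denotes the Hamming distance.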
In order to obtain our result, we employ a relaxed version of the well-known Johnson bound on list decoding that translates average Hamming distance guarantees between codewords into list decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature.
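As a purely illustrative companion to this discussion (not a construction from our proofs), the following Python sketch subsamples rows of the Hadamard-Walsh matrix and empirically measures the RIP-2 distortion on random sparse vectors; the dimensions $n$, $m$, $k$ and the trial count are hypothetical choices made here for demonstration.
\begin{verbatim}
# Illustrative sketch: empirical near-isometry of a random row-submatrix
# of the Hadamard-Walsh matrix on random sparse test vectors.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n = 1 << 10          # signal length (power of 2, as hadamard() requires)
m = 200              # number of randomly sampled rows (illustrative)
k = 10               # sparsity of the test vectors (illustrative)

H = hadamard(n) / np.sqrt(m)   # scaling makes E||Ax||^2 = ||x||^2 below
rows = rng.choice(n, size=m, replace=False)
A = H[rows, :]                 # m x n subsampled transform

# Probe the distortion sup | ||Ax||^2/||x||^2 - 1 | over random k-sparse x.
# (A true RIP-2 bound quantifies over ALL k-sparse x; this is only a probe.)
worst = 0.0
for _ in range(1000):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))
print(f"empirical distortion on random {k}-sparse vectors: {worst:.3f}")
\end{verbatim}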
Finally, we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of $k$-sparse signals of length $N$. Specifically, we improve the required number of samples from $O(k \log(N) \log^2(k) (\log k + \log\log N))$ to $O(k \log(N) \log^3(k))$. The proof bounds the expected supremum of a related Gaussian process via an improved analysis of the metric defined by the process. This improvement is crucial for our application to list decoding.
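To make the sparse-recovery setting concrete, here is a hypothetical Python sketch that draws $m$ random DFT samples of a $k$-sparse signal and reconstructs it; for simplicity it uses orthogonal matching pursuit rather than the $\ell_1$ minimization analyzed by Rudelson and Vershynin, and the parameter choices ($N$, $k$, $m$) are illustrative only.
\begin{verbatim}
# Hypothetical demonstration (not the paper's proof technique): recovering
# a k-sparse signal from m random DFT samples via orthogonal matching
# pursuit. With these illustrative sizes, recovery typically succeeds.
import numpy as np

rng = np.random.default_rng(1)
N, k = 512, 8
m = 80                                   # on the order of k * log N

# k-sparse signal and its partial Fourier measurements.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
freqs = rng.choice(N, size=m, replace=False)
A, y = F[freqs, :], F[freqs, :] @ x

# Greedily grow the support, refitting by least squares at each step.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
x_hat = np.zeros(N, dtype=complex)
x_hat[support] = coef
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
\end{verbatim}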