We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2-1/\mathrm{poly}(n)$. Thus we can only show that ``very hard'' LPN is harder than some ``very mildly hard'' worst case problem.
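For reference, the decisional LPN formulation we have in mind is the standard one; the notation below (in particular $\mathrm{Ber}_\epsilon$ for the Bernoulli noise distribution) is only illustrative of the parameter regime:
\[
\mathrm{LPN}_{n,\epsilon}:\ \text{distinguish samples } \big(a,\ \langle a,s\rangle \oplus e\big),\ \text{where } a \leftarrow \{0,1\}^n,\ e \leftarrow \mathrm{Ber}_\epsilon,\ \text{from uniform samples in } \{0,1\}^{n+1},
\]
for a uniformly random secret $s \in \{0,1\}^n$; throughout, the relevant regime is $\epsilon = 1/2 - 1/\mathrm{poly}(n)$.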
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP), which takes as input a generating matrix of a binary linear code of length $m$ and rank $n$, together with a target vector that is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx {\log^2 n}/{n}$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise rate $1/2-1/\mathrm{poly}(n)$.
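In symbols, one natural way to state the problem just described (fixing, as a convention, a column generating matrix over $\mathbb{F}_2$) is:
\[
(n,m,w)\text{-NCP}:\ \text{given } G \in \{0,1\}^{m\times n} \text{ of rank } n \text{ and } t \in \{0,1\}^m \text{ with } \min_{x \in \{0,1\}^n} \mathrm{wt}(Gx \oplus t) \le w,\ \text{find a minimizing codeword } Gx,
\]
where $\mathrm{wt}(\cdot)$ denotes Hamming weight and the relative error is the ratio $w/m$.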
Our proof relies on a smoothing lemma for codes, which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $\text{Search-}BPP^{SZK}$ (i.e.\ it is reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $NP$-hard. We then show that LPN with very low noise rate $\log^2 n/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that, in this parameter regime, LPN is also in $BPP^{SZK}$).