We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The ...
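To fix notation, an LPN instance with secret $s \in \mathbb{F}_2^n$ and noise rate $\tau$ consists of samples $(a, \langle a, s\rangle + e)$ with $a$ uniform and $e$ a Bernoulli($\tau$) error bit. A minimal sketch of this sample distribution; the parameters and helper names below are illustrative, not taken from the paper:

```python
import numpy as np

def lpn_samples(s, m, tau, rng):
    """Draw m LPN samples (a, <a,s> + e mod 2) with Bernoulli(tau) noise.
    (Illustrative helper, not from the paper.)"""
    n = len(s)
    A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)  # uniform a in F_2^n
    e = (rng.random(m) < tau).astype(np.uint8)           # Bernoulli(tau) noise bits
    b = (A @ s.astype(np.int64) + e) % 2                 # noisy labels <a,s> + e
    return A, b.astype(np.uint8)

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=128, dtype=np.uint8)         # hidden secret
A, b = lpn_samples(s, m=1000, tau=0.125, rng=rng)        # recovering s from (A, b) is LPN
```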
A black-box (BB) reduction is a central proof technique in theoretical computer science. However, the limitations of BB reductions have been studied over several decades, and a series of previous works gives strong evidence that nonadaptive BB reductions cannot be used to base cryptography on NP-hardness (e.g., Akavia et ...
A version of time-bounded Kolmogorov complexity, denoted KT, has received attention in the past several years, due to its close connection to circuit complexity and to the Minimum Circuit Size Problem (MCSP). Essentially all results about the complexity of MCSP hold also for MKTP (the problem of computing the KT ...
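For reference, KT is defined roughly as follows; this paraphrases the standard definition from Allender et al.'s "Power from Random Strings", and details such as the oracle-access convention vary between papers:

$$\mathrm{KT}(x) \;=\; \min\bigl\{\, |d| + t \;:\; \text{for all } i \le |x|+1,\ U^{d}(i,b) \text{ accepts in } t \text{ steps iff } x_i = b \,\bigr\},$$

where $U$ is a fixed universal machine with random access to the description $d$, $b$ ranges over $\{0,1,*\}$, and $x_{|x|+1} = *$ marks the end of the string. Charging for $|d| + t$ jointly is what ties KT to circuit size.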
We give a simplified proof of Hirahara's STOC'21 result showing that $DistPH \subseteq AvgP$ would imply $PH \subseteq DTIME[2^{O(n/\log n)}]$. The argument relies on a proof of a new result, symmetry of information for time-bounded Kolmogorov complexity under the assumption that $NP$ is easy on average, which is interesting in its own right ...
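For orientation (a schematic statement, not the paper's exact formulation or parameters): classical symmetry of information asserts $K(x,y) = K(x) + K(y \mid x) \pm O(\log |xy|)$, and the time-bounded analogue obtained under average-case easiness of $NP$ has the shape

$$K^{\mathrm{poly}(t)}(y \mid x) \;\le\; K^{t}(x, y) - K^{t}(x) + O(\log(t + |x| + |y|)),$$

while the converse inequality holds unconditionally up to polynomial overhead in $t$, since one can simply run a program for $x$ followed by a program for $y$ given $x$.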
We present a new framework for designing worst-case to average-case reductions. For a large class of problems, it provides an explicit transformation of algorithms running in time $T$ that are only correct on a small (subconstant) fraction of their inputs into algorithms running in time $\widetilde{O}(T)$ that are correct on ...
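The classical baseline here is random self-correction in the style of Blum, Luby, and Rubinfeld, which already yields such a transformation when the average-case algorithm is correct on all but a small constant fraction of inputs; the framework described above targets the much harder subconstant regime, which the sketch below does not reach. A minimal illustration of the classical reduction for matrix multiplication over $\mathbb{F}_2$, with all names illustrative:

```python
import numpy as np

def self_correct_mm(avg_mm, A, B, rng):
    """Classical BLR-style random self-reduction for matrix product over F_2
    (illustrative baseline, not the paper's construction).
    Uses AB = (A+R)(B+S) + (A+R)S + R(B+S) + RS over F_2; each of the four
    calls sees a marginally uniform input pair, so an oracle wrong on a
    delta fraction of inputs errs here with probability at most 4*delta
    (amplify by repeating with fresh R, S and taking entrywise majority)."""
    n = A.shape[0]
    R = rng.integers(0, 2, size=(n, n), dtype=np.uint8)  # uniform masks
    S = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
    return (avg_mm(A ^ R, B ^ S) ^ avg_mm(A ^ R, S)      # XOR = addition in F_2
            ^ avg_mm(R, B ^ S) ^ avg_mm(R, S))

rng = np.random.default_rng(1)
oracle = lambda X, Y: (X @ Y) % 2        # stand-in for an average-case algorithm
A = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
B = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
assert np.array_equal(self_correct_mm(oracle, A, B, rng), (A @ B) % 2)
```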