A one-way function is $d$-local if each of its outputs depends on at most $d$ input bits. In (Applebaum, Ishai, and Kushilevitz, FOCS 2004) it was shown that, under relatively mild assumptions, there exist $4$-local one-way functions (OWFs). This result is not far from optimal, since it is not hard to show that there are no $2$-local OWFs. The gap was partially closed in (Applebaum, Ishai, and Kushilevitz, FOCS 2004) by showing that the existence of a $3$-local OWF is implied by the intractability of decoding a random linear code (or, equivalently, the hardness of learning parity with noise).
In this note we provide further evidence for the existence of $3$-local OWFs. We construct a $3$-local OWF based on the assumption that a random function of (arbitrarily large) constant locality is one-way. (A similar assumption was previously made by Goldreich, ECCC 2000.) Our proof consists of two steps: (1) we show that, under the above assumption, random local functions remain hard to invert even when some information about the preimage $x$ is leaked; and (2) such ``robust'' local one-way functions can be converted into $3$-local one-way functions via a new construction of \emph{semi-private randomized encoding}. We believe that these results may be of independent interest.
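For concreteness, a Goldreich-style random $d$-local function can be written in the following illustrative form (a sketch of the kind of function the assumption refers to; the specific choice of predicate and subset distribution is left unspecified here and is not quoted from the paper):
\[
 f(x) \;=\; \big(P(x_{S_1}),\, P(x_{S_2}),\, \ldots,\, P(x_{S_m})\big), \qquad x \in \{0,1\}^n,
\]
where $P:\{0,1\}^d \to \{0,1\}$ is a fixed $d$-ary predicate and each $S_i$ is a $d$-subset of the $n$ input positions chosen at random. The one-wayness assumption then asserts that, for a suitable predicate $P$ and output length $m$, inverting $f$ on a random input $x$ is intractable.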