We establish strong inapproximability for finding the sparsest nonzero vector in a real subspace (where sparsity refers to the number of nonzero entries). Formally, we show that it is NP-hard (under randomized reductions) to approximate the sparsest vector in a subspace to within any constant factor. By simple tensoring, the inapproximability factor can be made almost polynomial, assuming NP cannot be solved in randomized quasipolynomial time. As a corollary, we recover the state-of-the-art inapproximability factors for the shortest vector problem (SVP), a foundational problem in lattice-based cryptography. Our proof is surprisingly simple, bypassing even the PCP theorem.
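For orientation, the display below spells out the quantity being approximated and the standard tensoring calculation behind the amplification; the notation and the parameter choice are ours, meant only as a sketch of how "simple tensoring" typically yields almost-polynomial factors.

\[
  \lambda(V) \;=\; \min_{v \in V \setminus \{0\}} \lVert v \rVert_{0},
  \qquad
  \lambda(U \otimes V) \;=\; \lambda(U)\,\lambda(V),
\]
\[
  \text{so a gap of } c \text{ in dimension } n
  \;\xrightarrow{\ k\text{-fold tensoring}\ }
  \text{a gap of } c^{2^{k}} \text{ in dimension } n^{2^{k}}.
\]

Taking $2^{k} = \mathrm{polylog}(n)$ keeps the instance quasipolynomial in size while making the gap $2^{(\log N)^{1-\varepsilon}}$ in the new size $N$, which is why the assumption becomes that NP has no randomized quasipolynomial-time algorithms.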
We are inspired by the homogenization framework developed over the course of several papers studying the inapproximability of minimum distance problems for integer lattices and error-correcting codes. Our proof combines (a) product testing via symmetric tensor codes and (b) encoding points of the hypercube as cosets of a random code in a higher-dimensional space, in order to embed an instance of non-homogeneous quadratic equations into the sparsest vector problem. Step (a) is inspired by Austrin and Khot's simplified proof of the hardness of approximating the minimum distance of a code over finite fields, and step (b) is inspired by Micciancio's partial derandomization of the hardness of SVP.
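To fix ideas, the display below shows the generic shape of such an embedding; it is a schematic of the homogenization idea, not the paper's actual construction, and the matrices $M_i$ are our notation.

\[
  x^{\top} A_i x + b_i^{\top} x + c_i = 0 \ \ (i = 1, \dots, m)
  \quad\Longleftrightarrow\quad
  y^{\top} M_i y = 0 \ \text{ with } y = (x_0, x),\ x_0 \neq 0,
  \qquad
  M_i = \begin{pmatrix} c_i & \tfrac{1}{2} b_i^{\top} \\ \tfrac{1}{2} b_i & A_i \end{pmatrix},
\]

so each non-homogeneous quadratic equation becomes a single linear condition on the symmetric tensor $y \otimes y$, and the role of a product test is to certify that a candidate solution of the resulting linear system really is (close to) such a rank-one symmetric tensor.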
Our reduction faces the challenge of performing (a) over the reals. We prove that the symmetric tensoring of the kernel of a random ±1 matrix furnishes an adequate product test (while still allowing (b)). The proof exposes a connection to Littlewood-Offord theory and relies on a powerful anti-concentration result of Rudelson and Vershynin.
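As a toy illustration of the anti-concentration phenomenon in play (a sketch written for intuition, not code from the paper), the basic Littlewood-Offord bound says that a random ±1 combination of k nonzero reals equals any fixed value with probability O(1/sqrt(k)); the Monte Carlo estimate below shows this decay, and the function name and parameters are ours.

import numpy as np

rng = np.random.default_rng(0)

def hit_probability(v, target=0.0, trials=20_000):
    """Monte Carlo estimate of P[<eps, v> = target] over uniform +/-1 signs eps."""
    eps = rng.choice([-1.0, 1.0], size=(trials, len(v)))
    return np.mean(np.isclose(eps @ v, target))

# For v with k nonzero entries, Littlewood-Offord gives P[<eps, v> = 0] = O(1/sqrt(k)):
# the estimate should roughly halve each time k is quadrupled.
for k in [4, 16, 64, 256]:
    print(k, hit_probability(np.ones(k)))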
Our main motivation in this work is the development of inapproximability techniques for problems over the reals. Analytic variants of the sparsest vector problem have connections to small-set expansion, quantum separability, and polynomial maximization over convex sets, all of which face similar barriers to inapproximability. The approach we develop could lead to progress on the hardness of some of these problems.