We give the first representation-independent hardness results for
PAC learning intersections of halfspaces, a central concept class
in computational learning theory. Our hardness results are derived
from two public-key cryptosystems due to Regev, which are based on the
worst-case hardness of well-studied lattice problems. Specifically, we
prove that a polynomial-time algorithm for PAC learning intersections of
$n^{\epsilon}$ halfspaces (for a constant $\epsilon>0$) in $n$ dimensions
would yield a polynomial-time solution to the $\tilde O(n^{1.5})$-unique shortest vector problem. We also prove that PAC learning intersections of $n^{\epsilon}$ low-weight halfspaces would yield a polynomial-time quantum solution to the $\tilde O(n^{1.5})$-shortest vector problem and the $\tilde O(n^{1.5})$-shortest independent vector problem. Under stronger assumptions on the hardness of these lattice problems, we show that there is no
polynomial-time algorithm for learning intersections of $\log^c n$ halfspaces in $n$ dimensions, for a sufficiently large constant $c>0$. Our approach also yields the first representation-independent hardness results for learning polynomial-size depth-$2$ neural networks and polynomial-size depth-$3$ arithmetic circuits.