Electronic Colloquium on Computational Complexity (ECCC)

Under the auspices of the Computational Complexity Foundation (CCF)
Paper:

TR23-018 | 1st March 2023 05:00

Memory-Sample Lower Bounds for Learning with Classical-Quantum Hybrid Memory

Authors: Qipeng Liu, Ran Raz, Wei Zhan
Publication: 2nd March 2023 06:17


Abstract:

In a work by Raz (FOCS 2016; J. ACM), it was proved that any algorithm for parity learning on $n$ bits requires either $\Omega(n^2)$ bits of classical memory or an exponential number (in $n$) of random samples. A line of recent works continued this research direction and showed that for a large collection of classical learning tasks, either super-linear classical memory or super-polynomially many samples are needed. All of these works model learning algorithms as classical branching programs, which perform classical computation within bounded memory.
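
As a concrete illustration of the parity learning task, here is a minimal Python sketch of the sample stream (an illustrative toy, not taken from the paper; all names are ours):

import random

n = 8                                        # number of bits (toy value)
x = [random.randrange(2) for _ in range(n)]  # hidden parity vector, uniform in {0,1}^n

def sample():
    # One labeled sample: a uniformly random a in {0,1}^n,
    # together with the label b = <a, x> mod 2.
    a = [random.randrange(2) for _ in range(n)]
    b = sum(ai * xi for ai, xi in zip(a, x)) % 2
    return a, b

# A learner observes a stream sample(), sample(), ... and must eventually
# output x; the lower bounds concern the memory kept between samples.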

However, these results do not capture all physical computational models, notably quantum computers and the use of quantum memory. This leaves open the possibility that a small piece of quantum memory could significantly reduce the need for classical memory or samples, and thus completely change the nature of these classical learning tasks. Despite recent research on the necessity of quantum memory for intrinsically quantum learning problems such as shadow tomography and purity testing, the role of quantum memory in classical learning tasks remains obscure.

In this work, we study classical learning tasks in the presence of quantum memory. We prove that any quantum algorithm with both classical and quantum memory for parity learning on $n$ bits requires either $\Omega(n^2)$ bits of classical memory, $\Omega(n)$ qubits of quantum memory, or an exponential number of samples. In other words, the memory-sample lower bound for parity learning remains qualitatively the same, even if the learning algorithm can use, in addition to its classical memory, a quantum memory of size $cn$ (for some constant $c>0$).

Our result is more general and applies to many other classical learning tasks. Following previous works, we represent a learning task by a matrix $M: A \times X \to \{-1,1\}$: an unknown $x$ is sampled uniformly at random from a concept class $X$, and a learning algorithm tries to recover $x$ from a stream of random samples $(a_i, b_i = M(a_i, x))$, where for every $i$, $a_i \in A$ is chosen uniformly at random. Assume that $k, \ell, r$ are integers such that every submatrix of $M$ with at least $2^{-k}\cdot|A|$ rows and at least $2^{-\ell}\cdot|X|$ columns has bias at most $2^{-r}$. We prove that any algorithm with classical-quantum hybrid memory for the learning problem corresponding to $M$ needs either (1) $\Omega(k \cdot \ell)$ bits of classical memory, (2) $\Omega(r)$ qubits of quantum memory, or (3) $2^{\Omega(r)}$ random samples, in order to achieve success probability at least $2^{-O(r)}$.
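
As a concrete instantiation (the standard one in this line of work, spelled out here for illustration): for parity learning, $A = X = \{0,1\}^n$ and $M(a,x) = (-1)^{\langle a,x \rangle}$, where $\langle a,x \rangle$ denotes the inner product modulo 2. By Lindsey's lemma, every submatrix with at least $2^{-k}\cdot 2^n$ rows and at least $2^{-\ell}\cdot 2^n$ columns has bias at most $2^{(k+\ell-n)/2}$, so taking $k = \ell = n/4$ satisfies the condition with $r = n/4$. The theorem above then recovers the parity bounds stated earlier: $\Omega(k \cdot \ell) = \Omega(n^2)$ bits of classical memory, $\Omega(r) = \Omega(n)$ qubits of quantum memory, or $2^{\Omega(n)}$ samples.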

Our results refute the possibility that a small amount of quantum memory can significantly reduce the classical memory needed for efficient learning of these problems. They also imply improved security for several existing cryptographic protocols in the bounded-storage model (protocols based on parity learning on $n$ bits): security holds even against a quantum adversary with at most $cn^2$ bits of classical memory and $cn$ bits of quantum memory (for some constant $c>0$).



ISSN 1433-8092