ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR17-120 | 10th August 2017 03:10

Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions





Revision #1
Authors: Paul Beame, Shayan Oveis Gharan, Xin Yang
Accepted on: 10th August 2017 03:10
Downloads: 766


Abstract:

We extend recently developed methods for obtaining time-space tradeoff lower bounds for problems of learning from random test samples to handle the situation where the space of tests is significantly smaller than the space of inputs, a class of learning problems not handled by prior work. This extension is based on a measure of how matrices amplify the 2-norms of probability distributions that is more refined than the 2-norms of these matrices.
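To make the refined measure concrete: restricted to inputs that are probability distributions, a matrix can amplify 2-norms much less than its spectral norm (its worst case over all unit vectors) would suggest. The numpy sketch below is only a heuristic illustration of that gap, not the paper's formal definition, which tracks amplification as a function of the distribution's 2-norm; the matrix, its scaling, and the Dirichlet sampling are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
M = rng.choice([-1.0, 1.0], size=(N, N)) / 8.0  # arbitrary random sign matrix, scaled

# Spectral norm: worst-case 2-norm amplification over ALL unit vectors.
spectral = np.linalg.norm(M, 2)

# Amplification over probability distributions p (p >= 0, sum(p) = 1):
# sample random points of the simplex and track the worst observed ratio.
# This only probes the restricted amplification from below.
worst = 0.0
for _ in range(10_000):
    p = rng.dirichlet(np.ones(N))
    worst = max(worst, np.linalg.norm(M @ p) / np.linalg.norm(p))

print(f"spectral norm                : {spectral:.3f}")
print(f"worst ratio on distributions : {worst:.3f}")  # typically well below spectral
```

For a random sign matrix the sampled ratios sit well below the spectral norm; that slack between the two quantities is roughly what a more refined amplification measure is designed to capture.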

As applications of our new technique, we show that any algorithm that learns $m$-variate homogeneous polynomial functions of degree at most $d$ over $\mathbb{F}_2$ from evaluations on randomly chosen inputs requires either space $\Omega(mn)$ or time $2^{\Omega(m)}$, where $n=m^{\Theta(d)}$ is the dimension of the space of such functions. These bounds are asymptotically optimal, since they match the tradeoffs achieved by natural learning algorithms for these problems.
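For intuition about why such tradeoffs are achievable at all, here is a minimal sketch (an illustration under my own assumptions, not code from the paper) of the kind of natural learner the abstract alludes to: each evaluation yields one linear equation over $\mathbb{F}_2$ in the $n$ monomial coefficients, and Gaussian elimination recovers them. The demo sizes, the $4n$ sample count, and the helper names are arbitrary.

```python
import itertools

import numpy as np

# Demo sizes: m variables, homogeneous monomials of degree exactly d over F_2.
m, d = 6, 2
monomials = list(itertools.combinations(range(m), d))  # x_i^2 = x_i over F_2
n = len(monomials)                                     # dimension of the function space

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=n, dtype=np.uint8)    # hidden coefficient vector

def features(x):
    """Evaluate every degree-d monomial of x; the label is <features(x), secret> mod 2."""
    return np.array([x[list(s)].prod() for s in monomials], dtype=np.uint8)

# Random evaluation samples; entries are 0/1 so uint8 arithmetic cannot overflow here.
samples = rng.integers(0, 2, size=(4 * n, m), dtype=np.uint8)
A = np.array([features(x) for x in samples])
b = (A @ secret) % 2

def solve_gf2(A, b):
    """Gauss-Jordan elimination over F_2; returns one solution of A x = b (mod 2)."""
    A, b = A.copy(), b.copy()
    nrows, ncols = A.shape
    pivots = []                            # (row, col) of each pivot
    r = 0
    for c in range(ncols):
        hits = np.nonzero(A[r:, c])[0]
        if hits.size == 0:
            continue                       # no pivot in this column
        p = r + hits[0]
        A[[r, p]] = A[[p, r]]              # swap pivot row into place
        b[[r, p]] = b[[p, r]]
        for i in np.nonzero(A[:, c])[0]:   # clear column c in every other row
            if i != r:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append((r, c))
        r += 1
        if r == nrows:
            break
    x = np.zeros(ncols, dtype=np.uint8)
    for r, c in pivots:
        x[c] = b[r]                        # free variables stay 0
    return x

recovered = solve_gf2(A, b)
assert ((A @ recovered) % 2 == b).all()    # always a consistent solution
print("recovered == secret:", np.array_equal(recovered, secret))
```

Storing the raw samples costs $O(mn)$ bits and the expanded system $O(n^2)$ bits, so this learner's space is polynomial in $n$ while its time is polynomial in $n$ and the sample count; the abstract's lower bound says any algorithm using substantially less space must instead pay time $2^{\Omega(m)}$.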


Paper:

TR17-120 | 31st July 2017 04:13

Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions





TR17-120
Authors: Paul Beame, Shayan Oveis Gharan, Xin Yang
Publication: 31st July 2017 04:13
Downloads: 803





ISSN 1433-8092