ECCC Report TR17-120
https://eccc.weizmann.ac.il/report/2017/120
Comments and Revisions published for TR17-120
Thu, 10 Aug 2017 03:10:31 +0300
Revision 1
Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions
Paul Beame,
Shayan Oveis Gharan,
Xin Yang
https://eccc.weizmann.ac.il/report/2017/120#revision1

We develop an extension of recently developed methods for obtaining time-space tradeoff lower bounds for problems of learning from random test samples, in order to handle the situation where the space of tests is significantly smaller than the space of inputs, a class of learning problems not handled by prior work. This extension is based on a measure of how matrices amplify the 2-norms of probability distributions that is more refined than the 2-norms of these matrices themselves.
As applications of our new technique, we show that any algorithm that learns $m$-variate homogeneous polynomial functions of degree at most $d$ over $F_2$ from evaluations on randomly chosen inputs either requires space $\Omega(mn)$ or time $2^{\Omega(m)}$, where $n = m^{\Theta(d)}$ is the dimension of the space of such functions. These bounds are asymptotically optimal, since they match the tradeoffs achieved by natural learning algorithms for these problems.
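The "natural learning algorithm" whose tradeoff the lower bound matches can be sketched as follows: collect roughly $n$ random evaluations and solve the resulting linear system in the $n$ monomial coefficients over $F_2$. The sketch below is illustrative only (the function names, sample count, and Gaussian-elimination routine are our own assumptions, not taken from the paper); it uses the fact that an $m$-variate homogeneous degree-$d$ polynomial over $F_2$ is determined by one bit per degree-$d$ multilinear monomial.

```python
# Hypothetical sketch of the natural learner: recover a homogeneous
# degree-d polynomial over F_2 from random evaluations by solving a
# linear system in its n = C(m, d) monomial coefficients.
# Names and the oversampling factor are illustrative assumptions.
import itertools
import random

def monomials(m, d):
    """All degree-d multilinear monomials on m variables, as index tuples."""
    return list(itertools.combinations(range(m), d))

def eval_poly(coeffs, mons, x):
    """Evaluate sum of coeffs[i] * prod_{j in mons[i]} x[j] over F_2."""
    return sum(c for c, mon in zip(coeffs, mons) if all(x[j] for j in mon)) % 2

def learn(m, d, oracle):
    """Query oracle(x) on random points and solve for the coefficients.

    Storing the system uses Theta(mn) bits of space, matching one side
    of the paper's tradeoff; oversampling by a constant factor makes the
    system full rank with high probability.
    """
    mons = monomials(m, d)
    n = len(mons)
    rows = []  # each row: n monomial values followed by the label bit
    for _ in range(4 * n):
        x = [random.randint(0, 1) for _ in range(m)]
        row = [int(all(x[j] for j in mon)) for mon in mons]
        rows.append(row + [oracle(x)])
    # Gaussian elimination over F_2 (XOR is addition mod 2).
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    # Read coefficients off the reduced rows.
    coeffs = [0] * n
    for row in rows[:r]:
        lead = next((c for c in range(n) if row[c]), None)
        if lead is not None:
            coeffs[lead] = row[n]
    return coeffs
```

By contrast, the low-space alternative in the tradeoff would avoid storing the system and instead search the $2^{\Theta(n)}$-size coefficient space, illustrating why either $\Omega(mn)$ space or exponential time is needed.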
Paper TR17-120
Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions
Paul Beame,
Shayan Oveis Gharan,
Xin Yang
https://eccc.weizmann.ac.il/report/2017/120
Mon, 31 Jul 2017 04:13:26 +0300