ECCC Report TR21-179
https://eccc.weizmann.ac.il/report/2021/179
Comments and Revisions published for TR21-179 (en-us)
Mon, 20 Dec 2021 10:42:28 +0200
Comment 1
| Warning: The claims of TR21-179 were not verified |
Oded Goldreich
https://eccc.weizmann.ac.il/report/2021/179#comment1
Your TR21-179 claims a number of results that, if correct, would constitute major breakthroughs in several areas.
Although ECCC does not verify the correctness of reports posted on it, it does try to reach a reasonable level of confidence regarding the correctness of the posted reports, especially in the case of claims that address central open problems.
I regret to say that this was not done in the current case.
Furthermore, it is extremely hard to verify (or refute) the various claims under the current exposition, which means that the paper in its current form does not meet our criterion of "being in readable form" (see CFP).
Paper TR21-179
| Smoothed Complexity of Learning Disjunctive Normal Forms, Inverting Fourier Transforms, and Verifying Small Circuits |
Tatsuie Tsukiji
This paper aims to derandomize the following problems in the smoothed analysis of Spielman and Teng: learning Disjunctive Normal Forms (DNF), inverting Fourier Transforms (FT), and verifying the unsatisfiability of small circuits. Learning algorithms must predict a future observation from only $m$ i.i.d. samples of a fixed but unknown joint distribution $P(G(x),y)$ so as to explain an $\eta$-noisy target; inverters must retrieve the hidden parameter $\theta$. Smoothed analysis weakens the adversarial distribution $P(x,y)$ by injecting an appropriate perturbation $G$ of larger min-entropy $\mathrm{H}_\infty(G)$.

We establish results 1--10 below for planted functions (Goldreich's PRG) $f_\theta(x) = f(\theta \circ x_1,\ldots,\theta \circ x_d)$ with variables $x_i \in \{0,1,\ldots,2n-1\}$, where $\theta\circ x_i := \theta(\lfloor{x_i/2}\rfloor)\oplus x_i$ in 1--4, $\theta\circ x_i := \theta_i \cdot \lfloor{x_i/2}\rfloor \cdot (-1)^{x_i}\in \mathbb{Z}_p$ in 5--6, and $\theta\circ x_i := \theta(\lfloor{x_i/2}\rfloor) \cdot (-1)^{x_i} \in \mathbb{Z}_p$ in 7. Results 8--10 verify the unsatisfiability of small circuits in the worst-case analysis (i.e., $\mathrm{H}_\infty(G) = 0$). Throughout 1--4, suppose $\log n \gg \log (ds/\varepsilon) + k$; in 4, $(X,Y)$ denotes an example picked uniformly at random from the $m$ observed data.
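To make the planted-function notation concrete, here is a minimal sketch of evaluating $f_\theta(x)$ under the encoding used in claims 1--4. It assumes $\theta$ is an $n$-bit secret and reads "$\theta(\lfloor x_i/2\rfloor)\oplus x_i$" as XOR with the low-order bit of $x_i$ (so each $x_i$ encodes a variable index $\lfloor x_i/2\rfloor$ plus a negation bit), which is one natural reading; the function names and the AND example are illustrative, not the paper's.

```python
from typing import Callable, List

def planted_eval(f: Callable[[List[int]], int], theta: List[int], x: List[int]) -> int:
    """Evaluate f_theta(x) = f(theta∘x_1, ..., theta∘x_d), where each
    x_i in {0, ..., 2n-1} encodes a variable index floor(x_i/2) and a
    negation bit x_i mod 2, and theta∘x_i := theta[floor(x_i/2)] XOR (x_i mod 2).
    (Assumed reading of the abstract's definition.)"""
    bits = [theta[xi // 2] ^ (xi & 1) for xi in x]
    return f(bits)

# Illustrative example: a 3-variate AND predicate over a 4-bit secret (n = 4,
# so each x_i ranges over {0, ..., 7}).
theta = [1, 0, 1, 1]
AND = lambda bits: int(all(bits))
print(planted_eval(AND, theta, [0, 5, 6]))  # theta[0]^0=1, theta[2]^1=0, theta[3]^0=1 -> 0
```

A learner in the smoothed model sees $m$ pairs $(x, f_\theta(x))$ under the perturbed input distribution and must predict future labels without knowing $\theta$.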
1. At $\mathrm{H}_\infty(G) = 0$, $\mathrm{Max}k\mathrm{CSP}$ of any $k$-variate predicate $f$ requires sample size $\Omega(n^{(k-\epsilon)/2}) \le m \le \tilde{O}(n^{k/2})$ to distinguish, in $n^{O(k)}$ time and given access to both samplers $P(x,y)$ and $P'(x,y)$, between $|\max_\theta P(y=f_\theta(x)) - \max_\theta P'(y=f_\theta(x))| \ge \Omega(1)$ and $P(x,y) \equiv P'(x,y)$.
2. At $\mathrm{H}_\infty(G) = c\log s$, the planted $s$-term DNF demands $m \ge n^{\Omega(\log s)}$ for $c < 1$, while $m \le n^{\frac{1}{2}\log s + O(1)}$ suffices for $c > 1$, for $n^{O(\log s)}$-time PAC learning (even under slight noise).
3. At $\mathrm{H}_\infty(G) = c\log(1/\varepsilon)$, the planted $\mathrm{AND}$ needs $m \ge n^{\Omega(\log 1/\varepsilon)}$ for $c < 1$, while $m \le n^{\frac{1}{2}\log(1/\varepsilon) + O(1)}$ suffices for $c > 1$, for $(\eta+\varepsilon)$-accurate agnostic learning in $n^{O(\log 1/\varepsilon)}$ time.
4. At $\mathrm{H}_\infty(G) = O(\log s)$, monotone DNF with expanding $s$ terms is PAC learnable from $m = n\cdot \mathrm{poly}(s)$ data satisfying $\mathrm{Pr}[\lfloor{{X_{i}/2}}\rfloor, \lfloor{{X_{i'}/2}}\rfloor] \ge {1}/{n^{1+\epsilon}}$, in $n \cdot \tilde{O}(s^{\log d})$ time.
5. LPN and LWE over $\mathbb{Z}_p$ with $p\ge n^{\Omega(1)}$, hiding small secrets ($|\theta_i| = O(1)$ for all $i$), are breakable in polynomial time.
6. GapSVP$_{\tilde{O}(n^2)}$ is breakable in polynomial time.
7. At $\mathrm{H}_\infty(G) = O(n)$, any $2^{n/2} \times 2^{n/2}$ matrix of sparsity $2^{n-n^{\epsilon}}$ demands a $\mathrm{PH^{cc}}$ protocol of size $\exp(n^{\Omega(1)})$, unless it is learnable from $m = \exp(n^{\epsilon})$ data in $\exp(n^{\epsilon})$ time.
8. $\mathrm{VP} \ne \mathrm{VNP}$ or $\forall k,\ \mathrm{quasi}$-$\mathrm{NP} \not\subset \mathrm{NC}^k$.
9. $\mathrm{PIT} \in \mathrm{DTIME}[n^{\mathrm{poly}(\log \log n)}]$ or $\forall \epsilon, \forall k, \mathrm{NTIME}[2^{n^{\epsilon}}] \not\subset \mathrm{SIZE}[n^k]$.
10. $\mathrm{quasi}$-$\mathrm{NP} \not\subset \mathrm{TC}^0$.

Mon, 13 Dec 2021 16:49:23 +0200
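For readers unfamiliar with the distribution attacked in claim 5, the following sketch generates LWE samples over $\mathbb{Z}_p$ with a small secret, $|\theta_i| \le B$. The bound $B$, the error magnitude, and all function names are illustrative assumptions; the paper's exact parameters may differ.

```python
import random

def lwe_samples(n: int, p: int, m: int, B: int = 1, seed: int = 0):
    """Generate m LWE samples (a, <a,theta> + e mod p) over Z_p with a
    small secret theta (|theta_i| <= B) and small noise e (|e| <= B).
    Illustrative sketch only; B = 1 is an assumed constant bound."""
    rng = random.Random(seed)
    theta = [rng.randint(-B, B) for _ in range(n)]   # small hidden secret
    samples = []
    for _ in range(m):
        a = [rng.randrange(p) for _ in range(n)]     # uniform public vector
        e = rng.randint(-B, B)                       # small additive noise
        b = (sum(ai * ti for ai, ti in zip(a, theta)) + e) % p
        samples.append((a, b))
    return theta, samples

theta, samples = lwe_samples(n=8, p=101, m=5)
# Each b differs from <a,theta> mod p by at most B; a breaker as in claim 5
# must recover theta (or predict b) from such samples in polynomial time.
```

Setting $B$ constant while $p \ge n^{\Omega(1)}$ is what "small secrets" means in claim 5; with uniform secrets the problem is the standard LWE assumption.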