We give improved lower bounds for binary $3$-query locally correctable codes (3-LCCs) $C \colon \{0,1\}^k \rightarrow \{0,1\}^n$. Specifically, we prove:
(1) If $C$ is a linear design 3-LCC, then $n \geq 2^{(1 - o(1))\sqrt{k} }$. A design 3-LCC has the additional property that the correcting sets for every ...
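As a toy illustration of the 3-query local-correction mechanism for a linear code (using the Hadamard code, which has $n = 2^k$, rather than the design 3-LCCs analyzed in the paper), here is a minimal Python sketch: the three positions $r$, $s$, $r \oplus s \oplus e_i$ always XOR to $x_i$, so a constant fraction of corruptions is tolerated by majority vote over independent random triples.

```python
import itertools
import random

# Toy 3-query local correction for a *linear* code: the Hadamard code
# C(x)_a = <a, x> mod 2 over all a in {0,1}^k (so n = 2^k). The three
# positions r, s, r ^ s ^ e_i always XOR to x_i, so a corrupted codeword
# can be corrected by majority vote over independent random triples.

def hadamard_encode(x):
    """Return the 2^k evaluations a -> <a, x> mod 2, in lexicographic order."""
    k = len(x)
    return [sum(a[j] & x[j] for j in range(k)) % 2
            for a in itertools.product((0, 1), repeat=k)]

def pos(a):
    """Codeword position of the point a (binary tuple/list -> integer)."""
    return int("".join(map(str, a)), 2)

def correct_bit(word, i, k, trials=25):
    """Recover x_i from a corrupted codeword, making 3 queries per trial."""
    votes = 0
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(k)]
        s = [random.randint(0, 1) for _ in range(k)]
        t = [r[j] ^ s[j] ^ (1 if j == i else 0) for j in range(k)]
        votes += word[pos(r)] ^ word[pos(s)] ^ word[pos(t)]
    return int(votes > trials // 2)

k = 8
x = [random.randint(0, 1) for _ in range(k)]
word = hadamard_encode(x)
for p in random.sample(range(len(word)), len(word) // 20):   # ~5% corruptions
    word[p] ^= 1
assert all(correct_bit(word, i, k) == x[i] for i in range(k))
```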
Over the last twenty years, an exciting interplay has emerged between proof systems and algorithms. Some natural families of algorithms can be viewed as a generic translation from a proof that a solution exists into an algorithm for finding the solution itself. This connection has perhaps been the most consequential ...
Several works have shown unconditional hardness (via integrality gaps) of computing equilibria using strong hierarchies of convex relaxations. Such results, however, only apply to the problem of computing equilibria that optimize a certain objective function and not to the (arguably more fundamental) task of finding \emph{any} equilibrium.
We present ...
Dinur, Khot, Kindler, Minzer and Safra (2016) recently showed that the (imperfect completeness variant of) Khot's 2-to-2 games conjecture follows from a combinatorial hypothesis about the soundness of a certain ``Grassmannian agreement tester''.
In this work, we show that the hypothesis of Dinur et al. follows from a ...
We prove that for every function $G\colon\{0,1\}^n \rightarrow \mathbb{R}^m$, if every output of $G$ is a polynomial (over $\mathbb{R}$) of degree at most $d$ of at most $s$ monomials and $m > \widetilde{O}(sn^{\lceil d/2 \rceil})$, then there is a polynomial-time algorithm that can distinguish a vector of the form ...
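As a hedged illustration of the flavor of such distinguishers (not the algorithm from the paper), the Python sketch below handles only the degenerate $d = 1$ case of an affine map with $m > n$: every output of $G$ lies in the affine subspace $\{Az + b : z \in \mathbb{R}^n\}$ of dimension at most $n$, while a generic random vector in $\mathbb{R}^m$ does not, so a least-squares residual test separates the two. The content of the result above is handling general degree $d$ with only $m > \widetilde{O}(s n^{\lceil d/2 \rceil})$ outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 40          # m > n, so a generic vector falls outside the affine span

# A public degree-1 (affine) map G(x) = A x + b: the degenerate d = 1 case.
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def looks_like_output_of_G(y, tol=1e-8):
    """Accept iff y lies in the affine subspace {A z + b : z in R^n}.

    Every G(x) with x in {0,1}^n sits in this subspace; a generic random
    vector in R^m (m > n) does not, so the least-squares residual exposes it.
    """
    z = np.linalg.lstsq(A, y - b, rcond=None)[0]
    return float(np.linalg.norm(A @ z - (y - b))) < tol

x = rng.integers(0, 2, size=n)                               # hidden seed in {0,1}^n
assert looks_like_output_of_G(A @ x + b)                     # the "pseudorandom" case
assert not looks_like_output_of_G(rng.standard_normal(m))    # the truly random case
```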
For every constant $\epsilon>0$, we give an $\exp(\tilde{O}(\sqrt{n}))$-time algorithm for the $1$ vs $1-\epsilon$ Best Separable State (BSS) problem of distinguishing, given an $n^2\times n^2$ matrix $M$ corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state $\rho$ that $M$ accepts with probability $1$, ...
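For intuition about the quantity being optimized (and not as the algorithm referred to above): the BSS value is $\max \operatorname{Tr}(M\rho)$ over separable $\rho$, which by linearity is attained at a pure product state $a \otimes b$. Below is a minimal numpy sketch of a local-search heuristic for that product-state optimum, alternating top-eigenvector updates for $a$ and $b$, with a randomly generated $M$ standing in for the real input.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # local dimension; M acts on C^n (x) C^n, i.e. it is an n^2 x n^2 matrix

# A random measurement operator 0 <= M <= I (hypothetical stand-in input).
H = rng.standard_normal((n * n, n * n)) + 1j * rng.standard_normal((n * n, n * n))
H = H @ H.conj().T
M = H / np.linalg.norm(H, 2)          # rescale so eigenvalues lie in [0, 1]
M4 = M.reshape(n, n, n, n)            # M[(i,j),(k,l)] viewed as a 4-index tensor

def best_product_state_value(M4, iters=200):
    """Heuristically maximize <a (x) b| M |a (x) b> by alternating updates.

    Fixing b, the objective is the quadratic form a* A a with
    A[i,k] = sum_{j,l} conj(b_j) M4[i,j,k,l] b_l, so the best a is A's top
    eigenvector, and symmetrically for b. This is a local-search heuristic,
    not the subexponential-time algorithm from the paper.
    """
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    b /= np.linalg.norm(b)
    for _ in range(iters):
        A_mat = np.einsum("ijkl,j,l->ik", M4, b.conj(), b)
        a = np.linalg.eigh(A_mat)[1][:, -1]
        B_mat = np.einsum("ijkl,i,k->jl", M4, a.conj(), a)
        b = np.linalg.eigh(B_mat)[1][:, -1]
    ab = np.kron(a, b)
    return float(np.real(ab.conj() @ M @ ab))

print("acceptance probability of best product state found:",
      best_product_state_value(M4))
```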
We prove that with high probability over the choice of a random graph $G$ from the Erd\H{o}s-R\'enyi distribution $G(n,1/2)$, the $n^{O(d)}$-time degree $d$ Sum-of-Squares semidefinite programming relaxation for the clique problem will give a value of at least $n^{1/2-c(d/\log n)^{1/2}}$ for some constant $c>0$.
This yields a nearly tight ...
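For concreteness, here is a minimal sketch of just the degree-2 level of this relaxation on a sample from $G(n,1/2)$, written with cvxpy (it assumes an SDP solver such as SCS is installed). The pseudo-moment matrix is indexed by $\{1\} \cup \{x_1,\dots,x_n\}$; booleanity gives $\widetilde{\mathbb{E}}[x_i^2] = \widetilde{\mathbb{E}}[x_i]$ and non-edges force $\widetilde{\mathbb{E}}[x_i x_j] = 0$. The lower bound above concerns higher degrees $d$ and asymptotic $n$, not this toy instance.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                                      # adjacency matrix of G ~ G(n, 1/2)

# Degree-2 pseudo-moment matrix, indexed by {1} u {x_1, ..., x_n}:
# Y[0,0] = E~[1], Y[0,i] = E~[x_i], Y[i,j] = E~[x_i x_j].
Y = cp.Variable((n + 1, n + 1), PSD=True)
non_edges = (A == 0).astype(float) - np.eye(n)   # 1 exactly on non-edges i != j
constraints = [
    Y[0, 0] == 1,
    cp.diag(Y)[1:] == Y[0, 1:],                  # booleanity: x_i^2 = x_i
    cp.multiply(non_edges, Y[1:, 1:]) == 0,      # non-edge => E~[x_i x_j] = 0
]
value = cp.Problem(cp.Maximize(cp.sum(cp.diag(Y)[1:])),
                   constraints).solve(solver=cp.SCS)

# The true clique number of G(n, 1/2) is only about 2*log2(n), while the
# relaxation value is polynomially large, in line with the bound above.
print("degree-2 SoS value:", value, " vs  2*log2(n) =", 2 * np.log2(n))
```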
We introduce a simple model illustrating the role of context in communication and the challenge posed by uncertainty of knowledge of context. We consider a variant of distributional communication complexity where Alice gets some information $x$ and Bob gets $y$, where $(x,y)$ is drawn from a known distribution, and Bob ...
We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in computer science, and show that ...
Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class $\mathcal{C} \subseteq P/poly$ of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial time, then $EXP^{NP} \not \subseteq \mathcal{C}$ (the class $EXP^{NP}$ was subsequently improved to $EXP$ by Hitchcock and ...
Let $X \subseteq \mathbb{R}^{n}$ and let ${\mathcal C}$ be a class of functions mapping $\mathbb{R}^{n} \rightarrow \{-1,1\}.$ The famous VC-Theorem states that a random subset $S$ of $X$ of size $O(\frac{d}{\epsilon^{2}} \log \frac{d}{\epsilon})$, where $d$ is the VC-Dimension of ${\mathcal C}$, is (with constant probability) an $\epsilon$-approximation for ${\mathcal C}$ ...
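As a small numerical illustration of what the $\epsilon$-approximation guarantee says (for the simplest case of one-dimensional threshold functions, which have VC dimension $1$), the Python sketch below draws a random subset $S$ of roughly the stated size and checks that, simultaneously for every threshold, the fraction of points labeled $+1$ in $S$ is within $\epsilon$ of the fraction in $X$. The sample size follows the formula quoted above with constant factors omitted; everything else is an illustrative placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, d = 0.05, 1                  # thresholds on the line have VC dimension 1
X = rng.standard_normal(100_000)  # the ground set X (here, points in R)
m = int(np.ceil((d / eps**2) * np.log(d / eps)))   # ~ O((d/eps^2) log(d/eps))
S = rng.choice(X, size=m, replace=True)            # the random subset S

def max_threshold_deviation(X, S):
    """sup over thresholds t of | Pr_{x in S}[x >= t] - Pr_{x in X}[x >= t] |."""
    ts = np.sort(np.concatenate([X, S]))           # candidate thresholds
    frac_X = 1.0 - np.searchsorted(np.sort(X), ts, side="left") / len(X)
    frac_S = 1.0 - np.searchsorted(np.sort(S), ts, side="left") / len(S)
    return float(np.max(np.abs(frac_X - frac_S)))

dev = max_threshold_deviation(X, S)
print(f"sample size {m}, worst-case deviation {dev:.4f} (target eps = {eps})")
```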
We show that all non-negative submodular functions have high noise stability. As a consequence, we obtain a polynomial-time learning algorithm for this class with respect to any product distribution on $\{-1,1\}^n$ (for any constant accuracy parameter $\epsilon$). Our algorithm also succeeds in the agnostic setting. Previous work on learning submodular ...
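High noise stability implies that the Fourier spectrum is concentrated on low-degree coefficients, which is what makes the generic low-degree (Linial-Mansour-Nisan style) learning approach applicable. Below is a minimal sketch of that generic approach over the uniform distribution on $\{-1,1\}^n$, with a toy coverage function as the submodular target; the degree cutoff and sample size are illustrative placeholders rather than the paper's parameters.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n, degree_cutoff, n_samples = 10, 3, 20_000   # illustrative placeholders

# A toy non-negative submodular target: a coverage function. Coordinate i
# "covers" two of six ground elements; f(x) is the size of the union of the
# sets whose coordinate is set to +1.
covers = [set(rng.choice(6, size=2, replace=False)) for _ in range(n)]
def f(x):
    return len(set().union(*(covers[i] for i in range(n) if x[i] == 1)))

# Low-degree algorithm: empirically estimate the Fourier coefficients
# \hat{f}(S) = E[f(x) * prod_{i in S} x_i] for |S| <= degree_cutoff, then
# predict with the truncated Fourier expansion.
X = rng.choice([-1, 1], size=(n_samples, n))
y = np.array([f(x) for x in X])
subsets = [S for k in range(degree_cutoff + 1) for S in combinations(range(n), k)]
coeffs = {S: float(np.mean(y * np.prod(X[:, list(S)], axis=1))) for S in subsets}

def predict(x):
    return sum(c * np.prod(x[list(S)]) for S, c in coeffs.items())

Xtest = rng.choice([-1, 1], size=(2000, n))
err = np.mean([(f(x) - predict(x)) ** 2 for x in Xtest])
print("test mean squared error of the low-degree approximation:", err)
```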