We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach exploits the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) single solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem.
Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems:
1) We give a quasipolynomial-time algorithm that approximates $\max_{\|x\|_2=1} P(x)$ within an additive factor of $\epsilon \|P\|$, where $\epsilon > 0$ is a constant, $P$ is an $n$-variate polynomial of degree $d = O(1)$ with nonnegative coefficients, and $\|P\|$ is the spectral norm of a matrix corresponding to $P$'s coefficients (a toy illustration of $\|P\|$ appears in the first sketch after this list).
Beyond being of interest in its own right, obtaining such an approximation for general polynomials (with possibly negative coefficients) is a long-standing open question in quantum information theory, and our techniques have already led to improved results in this area (Brand\~{a}o and Harrow, STOC '13).
2) We give a polynomial-time algorithm that, given a subspace $V \subseteq \mathbb{R}^n$ of dimension $d$ that (almost) contains the characteristic function of a set of size $n/k$,
finds a vector $v\in V$ that satisfies $\mathbb{E}_i v_i^4 \geq \Omega(d^{-1/3} k(\mathbb{E}_i v_i^2)^2)$.
This is a natural analytical relaxation of the problem of finding the sparsest element in a subspace, and is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012).
In particular, our results yield an improvement over the previously best known algorithms for Small Set Expansion in a certain range of parameters (the $L_4$ vs. $L_2$ sparsity measure itself is illustrated in the second sketch after this list).
3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$.
If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu\leq O(\min(1,n/d^2))$. In particular, when $d\leq \sqrt{n}$, this recovers a planted vector with up to $\Omega(n)$ nonzero coordinates.
When $d\leq n^{2/3}$, our algorithm improves upon existing methods based on comparing the $L_1$ and $L_\infty$ norms, which intrinsically require $\mu \leq O\left(1/\sqrt{d}\right)$; a toy version of this planted recovery experiment is sketched below.
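To make the error scale in item 1 concrete: for degree $d=4$, one natural matrix representation (an assumption of this sketch, not a statement of the paper's construction) writes $P(x) = \langle x \otimes x, M(x \otimes x)\rangle$ for an $n^2 \times n^2$ matrix $M$ with nonnegative entries, and takes $\|P\|$ to be the spectral norm of $M$. The NumPy sketch below (our own toy code and naming) checks numerically that $\max_{\|x\|_2=1} P(x) \leq \|P\|$, since $\|x \otimes x\|_2 = 1$ on the unit sphere, which is why an additive $\epsilon\|P\|$ guarantee is a natural benchmark.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): a degree-4 polynomial
# P(x) = <x (x) x, M (x (x) x)> given by an n^2 x n^2 matrix M with
# nonnegative entries; ||P|| is taken to be the spectral norm of M.
n = 5
rng = np.random.default_rng(0)
M = rng.random((n * n, n * n))  # nonnegative coefficients
M = (M + M.T) / 2               # symmetrize

def P(x):
    xx = np.kron(x, x)          # x tensor x; a unit vector when ||x||_2 = 1
    return xx @ M @ xx

spectral_norm = np.linalg.norm(M, 2)  # ||P||: largest singular value of M

# Crude sampled lower bound on max over the unit sphere; it never exceeds
# ||P|| because ||x (x) x||_2 = 1 whenever ||x||_2 = 1.
samples = rng.standard_normal((2000, n))
best = max(P(x / np.linalg.norm(x)) for x in samples)
print(f"sampled max ~ {best:.3f} <= ||P|| = {spectral_norm:.3f}")
```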
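The $L_4$ vs. $L_2$ sparsity measure of item 2 is easy to see in action. Here is a minimal sketch (assuming NumPy; the function name is ours) showing that the ratio $\mathbb{E}_i v_i^4 / (\mathbb{E}_i v_i^2)^2$ equals exactly $k$ on the characteristic vector of a set of size $n/k$, while a dense Gaussian vector has ratio close to $3$:

```python
import numpy as np

def l4_l2_ratio(v):
    """Sparsity measure E_i v_i^4 / (E_i v_i^2)^2.

    For the characteristic vector of a set of size n/k this ratio is
    exactly k, so a large ratio certifies an "analytically sparse"
    vector in the L4-vs-L2 sense.
    """
    v = np.asarray(v, dtype=float)
    return np.mean(v**4) / np.mean(v**2) ** 2

n, k = 10_000, 100
indicator = np.zeros(n)
indicator[: n // k] = 1.0               # characteristic vector, set size n/k
print(l4_l2_ratio(indicator))           # -> 100.0, i.e. exactly k
print(l4_l2_ratio(np.random.randn(n)))  # ~3 for a dense Gaussian vector
```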
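Finally, a toy version of the planted-sparse-vector setting of item 3 (again our own illustrative code, not the paper's SoS-based recovery algorithm): we plant a $\mu n$-sparse unit vector in a random $d$-dimensional subspace and observe that it stands out under the same $L_4/L_2$ measure, with ratio roughly $3/\mu$ versus roughly $3$ for a typical vector in the subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, mu = 4000, 40, 0.05  # toy parameters, chosen for illustration only

# Plant a mu*n-sparse unit vector inside an otherwise random d-dim subspace.
v = np.zeros(n)
support = rng.choice(n, size=int(mu * n), replace=False)
v[support] = rng.standard_normal(len(support))
v /= np.linalg.norm(v)

# Orthonormal basis whose span contains v (first column of Q is +/- v).
basis = np.linalg.qr(np.column_stack([v, rng.standard_normal((n, d - 1))]))[0]

def l4_l2_ratio(w):
    """The L4-vs-L2 sparsity measure from item 2."""
    return np.mean(w**4) / np.mean(w**2) ** 2

# The planted vector is conspicuous (ratio ~ 3/mu), while a typical vector
# of the subspace looks dense (ratio ~ 3):
print(l4_l2_ratio(v))
print(np.mean([l4_l2_ratio(basis @ rng.standard_normal(d)) for _ in range(20)]))
```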