Revision #1

Authors: Vijay Bhattiprolu, Mrinalkanti Ghosh, Venkatesan Guruswami, Euiwoong Lee, Madhur Tulsiani

Accepted on: 21st April 2017 06:20

We consider the following basic problem: given an $n$-variate degree-$d$ homogeneous polynomial $f$ with real coefficients, compute a unit vector $x \in \mathbb{R}^n$ that maximizes $|f(x)|$. Besides its fundamental nature, this problem arises in diverse contexts ranging from tensor and operator norms to graph expansion to quantum information theory. The homogeneous degree $2$ case is efficiently solvable as it corresponds to computing the spectral norm of an associated matrix, but the higher degree case is NP-hard.
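In symbols (notation ours, not fixed by the abstract), the optimization problem and the tractable degree-$2$ special case are:

```latex
\[
  \|f\|_{2} \;:=\; \max_{x \in \mathbb{R}^n,\; \|x\|_2 = 1} |f(x)|,
  \qquad
  f(x) \;=\; \sum_{|\alpha| = d} c_\alpha\, x^\alpha .
\]
% For d = 2 we can write f(x) = x^\top A x for a symmetric matrix A,
% so \|f\|_2 equals the spectral norm \|A\|, computable by eigendecomposition;
% for d >= 3 the analogous quantity is an NP-hard tensor norm.
```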

We give approximation algorithms for this problem that offer a trade-off between the approximation ratio and running time: in $n^{O(q)}$ time, we get an approximation within factor $O_d((n/q)^{d/2-1})$ for arbitrary polynomials, $O_d((n/q)^{d/4-1/2})$ for polynomials with non-negative coefficients, and $O_d(\sqrt{m/q})$ for sparse polynomials with $m$ monomials. The approximation guarantees are with respect to the optimum of the level-$q$ sum-of-squares (SoS) SDP relaxation of the problem (though our algorithms do not rely on actually solving the SDP). Known polynomial time algorithms for this problem rely on ``decoupling lemmas.'' Such tools cannot offer a trade-off like ours, as they blow up the number of variables by a factor equal to the degree. We develop new decoupling tools that are more efficient in the number of variables, at the expense of less structure in the output polynomials. This enables us to harness the benefits of higher level SoS relaxations. Our decoupling methods also work with ``folded polynomials,'' which are polynomials with polynomials as coefficients. This allows us to exploit easy substructures (such as quadratics) by considering them as coefficients in our algorithms.
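As a quick numeric sanity check of the stated trade-off (illustrative parameter values; the constants hidden in $O_d(\cdot)$ are ignored):

```python
# Illustrative only: evaluate the approximation-ratio exponents stated in the
# abstract, ignoring the hidden O_d(.) constants.

def ratio_general(n, q, d):
    """Factor (n/q)^(d/2 - 1) for arbitrary homogeneous polynomials."""
    return (n / q) ** (d / 2 - 1)

def ratio_nonneg(n, q, d):
    """Factor (n/q)^(d/4 - 1/2) for polynomials with non-negative coefficients."""
    return (n / q) ** (d / 4 - 1 / 2)

n, d = 10_000, 4
for q in (1, 100, n):
    # Raising q shrinks the ratio but raises the n^O(q) running time;
    # q = n drives both ratios to O_d(1), at exponential cost.
    print(q, ratio_general(n, q, d), ratio_nonneg(n, q, d))
```

Note how q = 100 already improves the general-case factor from n^(d/2-1) = 10^4 to 100, at the price of n^O(100) time.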

We complement our algorithmic results with polynomially large integrality gaps for $d$ levels of the SoS relaxation. For general polynomials this follows from known results for \emph{random} polynomials, which yield a gap of $\Omega_d(n^{d/4-1/2})$. For polynomials with non-negative coefficients, we prove an $\tilde{\Omega}(n^{1/6})$ gap for the degree $4$ case, based on a novel distribution of $4$-uniform hypergraphs. We establish an $n^{\Omega(d)}$ gap for general degree $d$, albeit for a slightly weaker (but still very natural) relaxation. Toward this, we give a method to lift a level-$4$ solution matrix $M$ to a higher level solution, under a mild technical condition on $M$.

From a structural perspective, our work yields worst-case convergence results on the performance of the sum-of-squares hierarchy for polynomial optimization. Despite the popularity of SoS in this context, such results were previously only known for the case of $q = \Omega(n)$.

Significant revisions and restructuring in many places; improvement in the SoS gap for non-negative polynomials; addition of method to lift SoS gaps to higher levels (based on what we call the Tetris theorem); certification and SoS gap results for random polynomials no longer appear in this paper.

TR16-185

Authors: Vijay Bhattiprolu, Mrinalkanti Ghosh, Venkatesan Guruswami, Euiwoong Lee, Madhur Tulsiani

Publication: 18th November 2016 18:20

We consider the following basic problem: given an $n$-variate degree-$d$ homogeneous polynomial $f$ with real coefficients, compute a unit vector $x \in \mathbb{R}^n$ that maximizes $|f(x)|$. Besides its fundamental nature, this problem arises in diverse contexts ranging from tensor and operator norms to graph expansion to quantum information theory. The homogeneous degree $2$ case is efficiently solvable as it corresponds to computing the spectral norm of an associated matrix, but the higher degree case is NP-hard.

In this work, we give multiplicative approximation algorithms for this problem. Our algorithms leverage the tractability of the degree $2$ case, and output the best solution among a carefully constructed set of quadratic polynomials. They offer a trade-off between the approximation ratio and running time, governed by the number of quadratic problems we search over. Specifically, in $n^{O(q)}$ time, we get an approximation within factor $O_d((n/q)^{d/2-1})$ for arbitrary polynomials, and $O_d((n/q)^{d/4-1/2})$ for polynomials with non-negative coefficients. The approximation guarantees are with respect to the optimum of the level-$q$ SoS SDP relaxation of the problem, which the algorithm rounds to a unit vector. We also consider the case when $f$ is random with independent $\pm 1$ coefficients, and prove that, w.h.p., the level-$q$ SoS relaxation certifies a bound within a factor $\tilde{O}_d((n/q)^{d/4-1/2})$ of the optimum.
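The paper's algorithms are more involved, but the basic leverage of the degree-$2$ case can be illustrated with a toy heuristic (ours, not the paper's construction): restrict two slots of a degree-$4$ form with a fixed unit vector $y$ to obtain a quadratic form, then solve that quadratic exactly by eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy degree-4 form f(x) = sum_{ijkl} T[i,j,k,l] x_i x_j x_k x_l.
T = rng.standard_normal((n, n, n, n))

def f(x):
    return np.einsum("ijkl,i,j,k,l->", T, x, x, x, x)

# Fix two of the four "slots" with a random unit vector y; the restriction
# g(x) = f(x, x, y, y) is a quadratic form x^T A x.
y = rng.standard_normal(n)
y /= np.linalg.norm(y)
A = np.einsum("ijkl,k,l->ij", T, y, y)
A = (A + A.T) / 2  # symmetrize

# The degree-2 problem max_{|x|=1} |x^T A x| is solved exactly by the
# eigenvector of the eigenvalue largest in absolute value.
w, V = np.linalg.eigh(A)
x = V[:, np.argmax(np.abs(w))]

print(abs(f(x)))  # a candidate lower bound on max_{|x|=1} |f(x)|
```

Searching over many such restrictions (rather than one random $y$) is the flavor of "output the best solution among a carefully constructed set of quadratic polynomials," though the paper's construction and analysis are SoS-based.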

We complement our algorithmic results with polynomially large integrality gaps for $d$ levels of the SoS relaxation. For the random polynomial case, we show a gap of $\Omega_d(n^{d/4-1/2})$, which precisely matches the exponent of our upper bound and shows the necessity of the $\Omega(d)$ exponent in the approximation ratio for general polynomials. For polynomials with non-negative coefficients, we show an $\tilde{\Omega}(n^{1/12})$ gap for the degree $4$ case.

To obtain our results, we develop general techniques which help analyze the approximation obtained by higher levels of the SoS hierarchy. We believe these techniques will also be useful in understanding polynomial optimization for other constrained settings.