Under the auspices of the Computational Complexity Foundation (CCF)

### Paper:

TR18-103 | 30th April 2018 19:46

#### Relative Error Tensor Low Rank Approximation

Authors: Zhao Song, David Woodruff, Peilin Zhong
Publication: 27th May 2018 19:31
Abstract:

We consider relative error low rank approximation of tensors with respect to the Frobenius norm. Namely, given an order-$q$ tensor $A \in \mathbb{R}^{\prod_{i=1}^q n_i}$, output a rank-$k$ tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon) {\rm OPT}$, where ${\rm OPT} = \inf_{\textrm{rank-}k~A'} \|A-A'\|_F^2$. Despite much success in obtaining relative error low rank approximations for matrices, no such results were known for tensors for arbitrary $(1+\epsilon)$-approximations. One structural issue is that there may be no rank-$k$ tensor $A_k$ achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the rank of a tensor, which is NP-hard. We bypass these two issues via (1) bicriteria and (2) parameterized complexity solutions:
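As a small illustrative sketch of the objective (not the paper's algorithm; all names below are ours), for order $q = 3$ a rank-$k$ candidate $B$ is a sum of $k$ rank-1 outer products, and the error is the squared Frobenius norm of $A - B$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

# Order-3 example: A is an exactly rank-k tensor plus small noise.
# A rank-k candidate B is a sum of k rank-1 (outer product) terms.
U, V, W = (rng.standard_normal((n, k)) for _ in range(3))
signal = np.einsum('ir,jr,kr->ijk', U, V, W)  # sum_r u_r (x) v_r (x) w_r
A = signal + 1e-3 * rng.standard_normal((n, n, n))

B = signal  # one particular rank-k candidate
err = np.sum((A - B) ** 2)  # ||A - B||_F^2
# OPT is the infimum of this quantity over all rank-k tensors, so
# err >= OPT; a (1+eps)-approximation asks for err <= (1+eps) * OPT.
```

Here the candidate $B$ is chosen by hand; finding a near-optimal $B$ efficiently is exactly what the results below address.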

(1) We give an algorithm which outputs a rank $k' = O((k/\epsilon)^{q-1})$ tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon) {\rm OPT}$ in $nnz(A) + n \cdot {\rm poly}(k/\epsilon)$ time in the real RAM model, whenever either $A_k$ exists or ${\rm OPT} > 0$. Here $nnz(A)$ denotes the number of non-zero entries in $A$.

(2) We give an algorithm for any $\delta >0$ which outputs a rank $k$ tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon) {\rm OPT}$ and runs in $( nnz(A) + n {\rm poly} (k/\epsilon) + \exp(k^2/\epsilon) ) \cdot n^\delta$ time in the unit cost RAM model.

For outputting a rank-$k$ tensor, or even a bicriteria solution with rank $Ck$ for a certain constant $C>1$, we show a $2^{\Omega(k^{1-o(1)})}$ time lower bound under the Exponential Time Hypothesis.

Our results are based on an ``iterative existential argument'', and also give the first relative error low rank approximations for tensors for a large number of error measures for which nothing was known. In particular, we give the first relative error approximation algorithms on tensors for: column, row, and tube subset selection, entrywise $\ell_p$-low rank approximation for $1 \leq p < 2$, low rank approximation with respect to sum of Euclidean norms of faces or tubes, weighted low rank approximation, and low rank approximation in distributed and streaming models. We also obtain several new results for matrices, such as $nnz(A)$-time CUR decompositions, improving the previous $nnz(A) \log n$-time CUR decompositions, which may be of independent interest.
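The matrix CUR decomposition mentioned above admits a short self-contained illustration. This is a hypothetical naive version, not the paper's $nnz(A)$-time algorithm: select actual columns $C$ and rows $R$ of a low-rank $A$, set $U = C^+ A R^+$, and $CUR$ recovers $A$ when the chosen columns and rows span its column and row spaces.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3

# An exactly rank-k matrix A.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

# Naive CUR (illustration only): take 2k random actual columns and rows.
# The paper's algorithms select them far more carefully, in nnz(A) time.
cols = rng.choice(n, size=2 * k, replace=False)
rows = rng.choice(n, size=2 * k, replace=False)
C, R = A[:, cols], A[rows, :]
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)

# C @ pinv(C) and pinv(R) @ R are orthogonal projectors onto col(C) and
# row(R); since A has rank k and the sampled columns/rows generically
# span its column/row spaces, C U R equals A up to floating point error.
rel_err = np.linalg.norm(A - C @ U @ R, 'fro') / np.linalg.norm(A, 'fro')
```

The point of CUR over a generic low rank factorization is interpretability: $C$ and $R$ consist of actual columns and rows of $A$.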

ISSN 1433-8092