ECCC - Reports
https://eccc.weizmann.ac.il
Latest Reports published at https://eccc.weizmann.ac.il
Tue, 27 Sep 2022 15:24:21 +0300

TR22-137 | On blocky ranks of matrices |
Daniel Avraham ,
Amir Yehudayoff
https://eccc.weizmann.ac.il/report/2022/137

A matrix is blocky if it is a blowup of a permutation matrix. The blocky rank of a matrix M is the minimum number of blocky matrices that linearly span M. Hambardzumyan, Hatami and Hatami defined blocky rank and showed that it is connected to communication complexity and operator theory. We describe additional connections to circuit complexity and combinatorics, and we prove upper and lower bounds on blocky rank in various contexts.
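As a quick illustration of the definition (our own sketch, not from the paper): the 1-entries of a blocky matrix form a disjoint union of all-ones rectangles with pairwise disjoint row sets and pairwise disjoint column sets, which for 0/1 matrices amounts to any two row supports being equal or disjoint.

```python
def is_blocky(M):
    # A 0/1 matrix is blocky iff its 1-entries are a disjoint union of
    # all-ones rectangles with disjoint row sets and disjoint column
    # sets; equivalently, any two row supports are equal or disjoint.
    supports = [frozenset(j for j, v in enumerate(row) if v) for row in M]
    return all(s == t or not (s & t)
               for i, s in enumerate(supports)
               for t in supports[i + 1:])
```

For example, any permutation matrix (or a blowup of one, such as two identical all-ones rows next to a disjoint block) passes the check, while an upper-triangular all-ones matrix fails it.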
Tue, 27 Sep 2022 15:24:21 +0300
https://eccc.weizmann.ac.il/report/2022/137

TR22-136 | Rounds vs Communication Tradeoffs for Maximal Independent Sets |
Sepehr Assadi,
Gillat Kol,
Zhijun Zhang
https://eccc.weizmann.ac.il/report/2022/136

We consider the problem of finding a maximal independent set (MIS) in the shared blackboard communication model with vertex-partitioned inputs. There are $n$ players corresponding to vertices of an undirected graph, and each player sees the edges incident on its vertex; this way, each edge is known by both its endpoints and is thus shared by two players. The players communicate in simultaneous rounds by posting their messages on a shared blackboard visible to all players, with the goal of computing an MIS of the graph. While the MIS problem is well studied in other distributed models, and while the shared blackboard is, perhaps, the simplest broadcast model, lower bounds for our problem were only known against one-round protocols.
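For readers less familiar with the problem: an independent set is maximal when no further vertex can be added. A minimal sketch of the classical sequential greedy algorithm (a centralized illustration only, not the distributed protocol studied in the paper):

```python
def greedy_mis(n, edges):
    # Scan vertices in a fixed order; add a vertex whenever none of its
    # neighbors has already been added. The result is independent by
    # construction and maximal: every skipped vertex has a neighbor in it.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    mis = set()
    for v in range(n):
        if not adj[v] & mis:
            mis.add(v)
    return mis
```

On the path 0-1-2-3 this yields {0, 2}; the difficulty addressed by the paper is achieving this kind of output with few simultaneous broadcast rounds and little communication.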
We present a lower bound on the round-communication tradeoff for computing an MIS in this model. Specifically, we show that when $r$ rounds of interaction are allowed, at least one player needs to communicate $\Omega(n^{1/20^{r+1}})$ bits. In particular, with logarithmic bandwidth, finding an MIS requires $\Omega(\log\log{n})$ rounds. This lower bound can be compared with the algorithm of Ghaffari, Gouleakis, Konrad, Mitrović, and Rubinfeld [PODC 2018] that solves MIS in $O(\log\log{n})$ rounds but with a logarithmic bandwidth for an average player. Additionally, our lower bound further extends to the closely related problem of maximal bipartite matching.
The presence of edge-sharing gives the algorithms in our model a surprising power and numerous algorithmic results exploiting this power are known. For a similar reason, proving lower bounds in this model is much more challenging, as this sharing in the players' inputs prohibits the use of standard number-in-hand communication complexity arguments. Thus, to prove our results, we devise a new round elimination framework, which we call partial-input embedding, that may also be useful in future work for proving round-sensitive lower bounds in the presence of shared inputs.
Finally, we discuss several implications of our results for multi-round (adaptive) distributed sketching algorithms, for the broadcast congested clique, and for the welfare maximization problem in two-sided matching markets.

Sun, 25 Sep 2022 09:24:48 +0300
https://eccc.weizmann.ac.il/report/2022/136

TR22-135 | Decision Tree Complexity versus Block Sensitivity and Degree |
Swagato Sanyal,
Supartha Poddar,
Rahul Chugh
https://eccc.weizmann.ac.il/report/2022/135

Relations between the decision tree complexity and various other complexity measures of Boolean functions are a thriving topic of research in computational complexity. While decision tree complexity has long been known to be polynomially related to many other measures, the optimal exponents of many of these relations are not known. It is known that decision tree complexity is bounded above by the cube of block sensitivity, and by the cube of polynomial degree. However, the widest separation between decision tree complexity and each of block sensitivity and degree that is witnessed by known Boolean functions is quadratic.
Proving quadratic relations between these measures would resolve several open questions in decision tree complexity. For example, we would get a tight relation between decision tree complexity and the square of randomized decision tree complexity, and a tight relation between zero-error randomized decision tree complexity and the square of fractional block sensitivity, resolving an open question raised by Aaronson. In this work, we investigate the tightness of the existing cubic upper bounds.
We improve the cubic upper bounds for many interesting classes of Boolean functions. We show that for graph properties and for functions with a constant number of alternations, both of the cubic upper bounds can be improved to quadratic. We define a class of Boolean functions, which we call zebra functions, comprising the Boolean functions for which every monotone path from $0^n$ to $1^n$ has the same number of alternations. This class contains the symmetric and monotone functions as subclasses. We show that for any zebra function, decision tree complexity is at most the square of block sensitivity, and certificate complexity is at most the square of degree.
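To make the central measure concrete, here is a brute-force computation of block sensitivity for tiny functions (an illustration of ours, exponential in $n$ and usable only for very small inputs; all names are our own):

```python
from itertools import combinations

def flip(x, block):
    # Flip the coordinates of x listed in block.
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def max_disjoint(blocks):
    # Size of the largest pairwise-disjoint subfamily (brute force).
    if not blocks:
        return 0
    B, rest = blocks[0], blocks[1:]
    skip = max_disjoint(rest)
    take = 1 + max_disjoint([C for C in rest if not (B & C)])
    return max(skip, take)

def block_sensitivity(f, n):
    # bs(f): maximum over inputs x of the largest number of pairwise
    # disjoint blocks B with f(x with B flipped) != f(x).
    best = 0
    for v in range(2 ** n):
        x = tuple((v >> i) & 1 for i in range(n))
        blocks = [frozenset(B)
                  for r in range(1, n + 1)
                  for B in combinations(range(n), r)
                  if f(flip(x, B)) != f(x)]
        best = max(best, max_disjoint(blocks))
    return best
```

For instance, OR and parity on 3 bits have block sensitivity 3, while majority on 3 bits has block sensitivity 2.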
Finally, we show using a lifting theorem of communication complexity by Göös, Pitassi and Watson that the task of proving an improved upper bound on the decision tree complexity for all functions is in a sense equivalent to the potentially easier task of proving a similar upper bound on communication complexity for each bipartition of the input variables, for all functions. In particular, this implies that to bound the decision tree complexity it suffices to bound smaller measures like parity decision tree complexity, subcube decision tree complexity and decision tree rank, which are defined in terms of models that can be efficiently simulated by communication protocols.

Sun, 25 Sep 2022 09:22:55 +0300
https://eccc.weizmann.ac.il/report/2022/135

TR22-134 | Some Games on Turing Machines and Power from Random Strings |
Alexey Milovanov,
Greg McLellan
https://eccc.weizmann.ac.il/report/2022/134

Denote by $R$ the set of strings with high Kolmogorov complexity. In [E. Allender, H. Buhrman, M. Kouck\'y, D. van Melkebeek, and D. Ronneburger. Power from random strings. \emph{SIAM Journal on Computing}, 35:1467--1493, 2006.] the idea of using $R$ as an oracle for resource-bounded computation models was presented. This idea was later developed in several other papers.
We prove new lower bounds for $Q^R_{tt}$ and $Q^R_{sa}$:
- Oblivious-NP is a subset of $Q^R_{tt}$;
- Oblivious-MA is a subset of $Q^R_{sa}$.
Here $Q$ means quasi-polynomial time; ``sa'' means sub-adaptive reduction, a new type of reduction that we introduce. This type of reduction is not weaker than truth-table reduction and not stronger than Turing reduction.
We also prove upper bounds for $BPP^R_{tt}$ and $P^R_{sa}$, following [E. Allender, L. Friedman, and W. Gasarch. Limits on the computational power of random strings.]:
- $P^R_{sa}$ is a subset of $EXP$;
- $BPP^R_{tt}$ is a subset of $AEXP(poly)$.
Here $AEXP(poly)$ is the class of languages decidable in exponential time by an alternating Turing machine that switches from an existential to a universal state or vice versa at most a polynomial number of times.
Finally, we analyze some games that originate in [E. Allender, L. Friedman, and W. Gasarch. Limits on the computational power of random strings.]. We prove completeness of these games. These results show that the methods in this framework cannot prove better upper bounds for $P^R$, $NP^R$ and $P^R_{tt}$ than those already known.

Sun, 25 Sep 2022 09:21:39 +0300
https://eccc.weizmann.ac.il/report/2022/134

TR22-133 | Downward Self-Reducibility in TFNP |
Prahladh Harsha,
Daniel Mitropolsky,
Alon Rosen
https://eccc.weizmann.ac.il/report/2022/133

A problem is downward self-reducible if it can be solved efficiently given an oracle that returns solutions for strictly smaller instances. In the decisional landscape, downward self-reducibility is well studied and it is known that all downward self-reducible problems are in PSPACE. In this paper, we initiate the study of downward self-reducible search problems which are guaranteed to have a solution, that is, the downward self-reducible problems in TFNP. We show that most natural PLS-complete problems are downward self-reducible and any downward self-reducible problem in TFNP is contained in PLS. Furthermore, if the downward self-reducible problem is in UTFNP (i.e. it has a unique solution), then it is actually contained in CLS. This implies that if integer factoring is downward self-reducible then it is in fact in CLS, suggesting that no efficient factoring algorithm exists using the factorization of smaller numbers.

Tue, 20 Sep 2022 21:28:17 +0300
https://eccc.weizmann.ac.il/report/2022/133

TR22-132 | Fooling polynomials using invariant theory |
Harm Derksen,
Emanuele Viola
https://eccc.weizmann.ac.il/report/2022/132

We revisit the problem of constructing explicit pseudorandom generators that fool degree-$d$ polynomials in $n$ variables over the field $F_q$ with error $\epsilon$, in the case of large $q$. Previous constructions either have seed length at least $2^{d}\log q$, and thus are only non-trivial when the degree is less than $\log n$, or else rely on a seminal reduction by Bogdanov (STOC 2005). This reduction yields seed length not less than $d^{4} \log n + \log q$ and requires fields of size at least $d^{6}/\epsilon^{2}$; explicit generators meeting such bounds are known.
Departing from Bogdanov's reduction, we develop an algebraic analogue of the Bogdanov-Viola paradigm (FOCS 2007, SICOMP 2010) of summing generators for degree-one polynomials. Whereas previous analyses of the paradigm are restricted to degree less than $\log n$, we give a new analysis which handles large degrees. A main new idea is to show that the construction preserves indecomposability of polynomials. Apparently for the first time in the area, the proof uses invariant theory.
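Schematically, the summing paradigm referred to above takes a generator $G_1$ that fools degree-one polynomials and adds independent copies of it (a generic sketch only, with the number of copies written abstractly as $c(d)$; the paper's contribution is the new analysis of this template for large degrees, not the formula itself):

```latex
% Sum of c(d) independent copies of a degree-one generator G_1:
G(s_1, \ldots, s_{c(d)}) \;=\; G_1(s_1) + G_1(s_2) + \cdots + G_1(s_{c(d)}) \;\in\; F_q^{\,n},
% so the seed length of G is c(d) times the seed length of G_1.
```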
Our approach yields several new pseudorandom generators. In particular, for large enough fields we obtain seed length $O(d\log n+\log q)$, which is optimal up to constant factors. We also construct generators for fields of size as small as $O(d^{4})$. Further reducing the field size requires a significant change in techniques: most or all generators for large-degree polynomials rely on Weil bounds, but such bounds are only applicable when $q>d^{4}$.

Sun, 18 Sep 2022 23:32:26 +0300
https://eccc.weizmann.ac.il/report/2022/132

TR22-131 | Radical Sylvester-Gallai for Cubics |
Rafael Mendes de Oliveira,
Akash Sengupta
https://eccc.weizmann.ac.il/report/2022/131

Let $\mathcal{F} = \{F_1, \ldots, F_m\}$ be a finite set of irreducible homogeneous multivariate polynomials of degree at most $3$ such that $F_i$ does not divide $F_j$ for $i\neq j$.
We say that $\mathcal{F}$ is a cubic radical Sylvester-Gallai configuration if for any two distinct $F_i,F_j$ there exists a third polynomial $F_k$ such that whenever $F_i,F_j$ vanish, $F_k$ also vanishes. In particular, for any two indices $i, j \in [m]$, there exists $k \in [m] \setminus \{i,j\}$ such that $F_k \in \mathrm{rad}(F_i, F_j)$.
We prove that any cubic radical Sylvester-Gallai configuration is low-dimensional, that is,
$$ \dim \operatorname{span}_{\mathbb{K}}(\mathcal{F}) = O(1).$$
This solves a conjecture of Gupta [G14] in degree $3$ and generalizes the result in [S20], which proved that quadratic radical Sylvester-Gallai configurations are low-dimensional.
Our result takes us one step closer towards solving the non-linear Sylvester-Gallai conjectures of Gupta [G14], which would yield the first deterministic polynomial time algorithm for the PIT problem for depth-4 circuits of bounded top and bottom fanins.
To prove our Sylvester-Gallai theorem, we develop several new tools combining techniques from algebraic geometry and elimination theory.
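As a side note on the radical condition itself: membership $F_k \in \mathrm{rad}(F_i, F_j)$ can be tested computationally via the Rabinowitsch trick, since $F_k$ lies in the radical iff the ideal $(F_i, F_j, 1 - tF_k)$ is the whole ring. A small sketch using sympy's Gröbner bases (our own illustration, unrelated to the paper's machinery; assumes sympy is available):

```python
from sympy import symbols, groebner

x, y, z, t = symbols('x y z t')

def in_radical(fk, fi, fj):
    # Rabinowitsch trick: fk is in rad(fi, fj) iff the Groebner basis
    # of (fi, fj, 1 - t*fk) is {1}, i.e. the ideal is the whole ring.
    G = groebner([fi, fj, 1 - t * fk], x, y, z, t, order='lex')
    return list(G.exprs) == [1]
```

For example, $x + y \in \mathrm{rad}(x, y)$ and $x \in \mathrm{rad}(x^2, y)$, while $z \notin \mathrm{rad}(x, y)$.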
Among our technical contributions, we prove a structure theorem characterizing non-radical ideals generated by two cubic forms, generalizing the structure theorems of [HP94, CTSSD87, S20].
Moreover, building upon the groundbreaking work [AH20a], we introduce the notion of wide Ananyan-Hochster algebras and show that these algebras allow us to transfer the local conditions of Sylvester-Gallai configurations into global conditions.

Sun, 18 Sep 2022 11:52:56 +0300
https://eccc.weizmann.ac.il/report/2022/131

TR22-130 | A Borsuk-Ulam lower bound for sign-rank and its application |
Hamed Hatami,
Kaave Hosseini,
Xiang Meng
https://eccc.weizmann.ac.il/report/2022/130

We introduce a new topological argument based on the Borsuk-Ulam theorem to prove a lower bound on sign-rank.
This result implies the strongest possible separation between randomized and unbounded-error communication complexity. More precisely, we show that for a particular range of parameters, the randomized communication complexity of the Gap Hamming Distance problem is $O(1)$ while its unbounded-error communication complexity is $\Omega(\log(n))$.
Previously, it was unknown whether the unbounded-error communication complexity could be asymptotically larger than the randomized communication complexity.
In connection to learning theory, we prove that, despite its learnability properties, the class of large margin half-spaces in $\mathbb{R}^d$ is genuinely high-dimensional, i.e., it cannot be embedded in $\mathbb{R}^{d-1}$. This result is closely related to a recent conjecture of Alon, Hanneke, Holzman, and Moran (FOCS 2021) about the VC dimension of this class.
Our final application is to the theory of dimension reductions. The Johnson-Lindenstrauss theorem implies that any set of $N$ unit vectors is embeddable in dimension $O(\gamma^{-2}\log N)$ without altering the signs of those pairwise inner products that have absolute values at least $\gamma>0$. Our result establishes the tightness of this bound, which answers a question of Linial, Mendelson, Schechtman, and Shraibman (Combinatorica, 27(2007)) in the case of partial functions.
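The dimension-reduction statement can be illustrated numerically (a toy sketch with parameters of our own choosing; the constant 8 in the target dimension is arbitrary, not the theorem's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, gamma = 200, 500, 0.5
k = int(np.ceil(8 * np.log(N) / gamma**2))  # target dimension O(gamma^-2 log N)

X = rng.standard_normal((N, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # N unit vectors in R^d
P = rng.standard_normal((d, k)) / np.sqrt(k)    # random JL projection
Y = X @ P

G_before, G_after = X @ X.T, Y @ Y.T
mask = np.abs(G_before) >= gamma                # pairs with |<x_i, x_j>| >= gamma
agree = np.mean(np.sign(G_before[mask]) == np.sign(G_after[mask]))
```

Here `agree` is the fraction of large-magnitude inner products whose signs survive the projection to dimension `k` < `d`; the paper's tightness result says the $O(\gamma^{-2}\log N)$ target dimension cannot be improved in general.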
Fri, 16 Sep 2022 05:54:31 +0300
https://eccc.weizmann.ac.il/report/2022/129

TR22-129 | Binary Codes with Resilience Beyond 1/4 via Interaction |
Klim Efremenko,
Gillat Kol,
Raghuvansh Saxena,
Zhijun Zhang
https://eccc.weizmann.ac.il/report/2022/129

In the reliable transmission problem, a sender, Alice, wishes to transmit a bit-string $x$ to a remote receiver, Bob, over a binary channel with adversarial noise. The solution to this problem is to encode $x$ using an error correcting code. As it is long known that the distance of binary codes is at most 1/2, reliable transmission is possible only if the channel corrupts (flips) at most a 1/4-fraction of the communicated bits.
We revisit the reliable transmission problem in the two-way setting, where both Alice and Bob can send bits to each other. Our main result is the construction of two-way error correcting codes that are resilient to a constant fraction of corruptions strictly larger than 1/4. Moreover, our code has constant rate and requires Bob to only send one short message. We mention that our result resolves an open problem by Haeupler, Kamath, and Velingker [APPROX-RANDOM, 2015] and by Gupta, Kalai, and Zhang [STOC, 2022].
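The 1/4 barrier for one-way schemes comes from minimum distance: a code of minimum distance $d$ supports unique decoding only up to $\lfloor (d-1)/2 \rfloor$ errors, and the relative distance of binary codes is at most 1/2. A toy sketch of nearest-codeword decoding hitting this radius (the code and all names are our own illustration):

```python
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def nearest(code, word):
    # Nearest-codeword (maximum-likelihood) decoding.
    return min(code, key=lambda c: hamming(c, word))

code = ['000000', '111000', '000111', '111111']   # minimum distance d = 3
sent = '000000'
one_error = '100000'    # 1 error  (<= (d-1)/2): decoded correctly
two_errors = '110000'   # 2 errors (>  (d-1)/2): decoding can fail
```

Here `nearest(code, one_error)` recovers the sent word, while `nearest(code, two_errors)` returns the wrong codeword, illustrating why one-way resilience stops at half the (relative) distance.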
Curiously, our new two-way code requires a fresh perspective on classical error correcting codes: while classical codes have only one distance guarantee for all pairs of codewords (i.e., the minimum distance), we construct codes where the distance between a pair of codewords depends on the “compatibility” of the messages they encode. We also prove that such codes are necessary for our result.

Thu, 15 Sep 2022 22:10:17 +0300
https://eccc.weizmann.ac.il/report/2022/129

TR22-128 | PPP-Completeness and Extremal Combinatorics |
Romain Bourneuf,
Lukáš Folwarczný,
Pavel Hubacek,
Alon Rosen,
Nikolaj Schwartzbach
https://eccc.weizmann.ac.il/report/2022/128

Many classical theorems in combinatorics establish the emergence of substructures within sufficiently large collections of objects. Well-known examples are Ramsey's theorem on monochromatic subgraphs and the Erdos-Rado sunflower lemma. Implicit versions of the corresponding total search problems are known to be PWPP-hard; here "implicit" means that the collection is represented by a poly-sized circuit inducing an exponentially large number of objects.
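For context, the defining problem of PWPP is the weak pigeonhole principle made implicit: given a circuit $C$ mapping $n$ bits to $n-1$ bits, find a collision, which must exist by pigeonhole. A brute-force sketch on a toy "circuit" (our own illustration; the point of the class is that no efficient method is known when $C$ is given as a circuit):

```python
def find_collision(C, n):
    # A map from n-bit strings to (n-1)-bit strings cannot be injective
    # (pigeonhole), so a collision always exists; find one by brute force.
    seen = {}
    for x in range(2 ** n):
        y = C(x)
        if y in seen:
            return seen[y], x
        seen[y] = x

a, b = find_collision(lambda x: x % 8, 4)   # toy 4-bit -> 3-bit map
```

The completeness results below show that the search problems arising from several extremal theorems are exactly as hard as this collision-finding task.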
We show that several other well-known theorems from extremal combinatorics, including Erdos-Ko-Rado, Sperner, and Cayley's formula, give rise to complete problems for PWPP and PPP. This is in contrast to the Ramsey and Erdos-Rado problems, for which establishing inclusion in PWPP has remained elusive. Besides significantly expanding the set of problems that are complete for PWPP and PPP, our work identifies some key properties of combinatorial proofs of existence that can give rise to completeness for these classes.
Our completeness results rely on efficient encodings for which finding collisions allows extracting the desired substructure. These encodings are made possible by the tightness of the bounds for the problems at hand (tighter than what is known for Ramsey's theorem and the sunflower lemma). Previous techniques for proving bounds in TFNP invariably made use of structured algorithms. Such algorithms are not known to exist for the theorems considered in this work, as their proofs "from the book" are non-constructive.

Tue, 13 Sep 2022 20:22:41 +0300
https://eccc.weizmann.ac.il/report/2022/128