One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq \mathbf{NC}^{1}$). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that the depth complexity of a composition of functions $f \diamond g$ is roughly ...
Håstad showed that any de Morgan formula (composed of AND, OR, and NOT gates) shrinks by a factor of $O(p^{2})$ under a random restriction that leaves each variable alive independently with probability $p$ [SICOMP, 1998]. Using this result, he gave an $\widetilde{\Omega}(n^{3})$ formula-size lower bound for the Andreev function, ...
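To make the restriction process concrete, here is a minimal sketch (Python; all function names are illustrative assumptions, not from the paper) of sampling a $p$-random restriction and applying it to a Boolean function:

```python
import random

def random_restriction(n, p, seed=None):
    """Sample a p-random restriction on n variables: each variable
    stays alive with probability p, and is otherwise fixed to a
    uniformly random bit. Entry i is None (alive), 0, or 1."""
    rng = random.Random(seed)
    return [None if rng.random() < p else rng.randint(0, 1)
            for _ in range(n)]

def restrict(f, rho):
    """Restrict a Boolean function f (a callable on bit tuples) by
    rho, yielding a function on the alive variables only."""
    alive = [i for i, v in enumerate(rho) if v is None]
    def g(bits):
        x = list(rho)
        for i, b in zip(alive, bits):
            x[i] = b
        return f(tuple(x))
    return g

# Example: restrict 6-variable parity with p = 1/2; the restricted
# function is a parity (or its negation) on the alive variables.
parity = lambda x: sum(x) % 2
rho = random_restriction(6, 0.5, seed=1)
g = restrict(parity, rho)
```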
One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^1$). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that depth complexity behaves “as expected” with respect to the composition of functions $f \diamond g$ ...
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph $G$ can be reversibly pebbled in time $t$ and space $s$ if and only if there is a Nullstellensatz refutation of the pebbling formula over $G$ in size $t+1$ ...
We significantly strengthen and generalize the theorem lifting Nullstellensatz degree to monotone span program size by Pitassi and Robere (2018) so that it works for any gadget with high enough rank, in particular, for useful gadgets such as equality and greater-than. We apply our generalized theorem to solve two open ...
One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^1$). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that depth complexity behaves “as expected” with respect to the composition of functions $f \diamond g$ ...
Lifting theorems relate the query complexity of a function $f:\left\{ 0,1 \right\}^n\to \left\{ 0,1 \right\}$ to the communication complexity of the composed function $f\circ g^n$, for some “gadget” $g:\left\{ 0,1 \right\}^b\times \left\{ 0,1 \right\}^b\to \left\{ 0,1 \right\}$. Such theorems allow transferring lower bounds from query complexity to ...
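A minimal sketch (Python; the inner-product gadget and all names below are illustrative assumptions, not the gadgets from the paper) of how such a composed function is formed:

```python
def compose(f, g, n):
    """Build the composed function f o g^n: Alice holds blocks
    x_1..x_n, Bob holds blocks y_1..y_n, and the value is
    f(g(x_1, y_1), ..., g(x_n, y_n))."""
    def h(xs, ys):
        assert len(xs) == len(ys) == n
        return f(tuple(g(x, y) for x, y in zip(xs, ys)))
    return h

# Toy gadget: inner product of two b-bit blocks, mod 2.
def ip_gadget(x, y):
    return sum(a & b for a, b in zip(x, y)) % 2

# f = 3-bit majority; composed function on 3 blocks of 2 bits each.
maj = lambda z: int(sum(z) >= 2)
h = compose(maj, ip_gadget, n=3)
print(h(((0, 1), (1, 1), (1, 0)), ((1, 1), (0, 1), (1, 1))))  # -> 1
```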
Consider a random sequence of $n$ bits that has entropy at least $n-k$, where $k\ll n$. A commonly used observation is that an average coordinate of this random sequence is close to being uniformly distributed, that is, the coordinate “looks random”. In this work, we prove a stronger result that ...
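A toy numerical check of this observation (a sketch under assumed parameters, not from the paper): take $X$ uniform over the first $2^{n-k}$ strings of $\left\{0,1\right\}^n$ in lexicographic order, so that $X$ has entropy exactly $n-k$; the first $k$ coordinates are constant, yet the bias averaged over all $n$ coordinates is only $k/2n$.

```python
from itertools import product
from collections import Counter

# X is uniform over the first 2^(n-k) strings, so H(X) = n - k.
n, k = 10, 2
S = list(product([0, 1], repeat=n))[: 2 ** (n - k)]

total = 0.0
for i in range(n):
    p1 = Counter(x[i] for x in S)[1] / len(S)
    total += abs(p1 - 0.5)  # statistical distance from a fair coin
print("average coordinate bias:", total / n)  # -> 0.1 = k / (2n)
```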
The direct-sum question is a classical question that asks whether performing a task on $m$ independent inputs is $m$ times harder than performing it on a single input. In order to study this question, Beimel et al. (Computational Complexity 23(1), 2014) introduced the following related problems:
* The choice ...
The composition of two Boolean functions $f:\left\{0,1\right\}^{m}\to\left\{0,1\right\}$ and $g:\left\{0,1\right\}^{n}\to\left\{0,1\right\}$ is the function $f \diamond g$ that takes as inputs $m$ strings $x_{1},\ldots,x_{m}\in\left\{0,1\right\}^{n}$ and computes
\[
(f \diamond g)(x_{1},\ldots,x_{m})=f\left(g(x_{1}),\ldots,g(x_{m})\right).
\]
This operation has been used several times for amplifying different hardness measures of $f$ and $g$ (see the sketch after this abstract). This comes at a cost: the ...
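For concreteness, a minimal Python sketch of the composition operation defined above (the toy functions are illustrative, not from the paper):

```python
def diamond(f, g, m, n):
    """Block composition f <> g: the input is m blocks of n bits;
    apply g to every block, then f to the m resulting bits."""
    def h(blocks):
        assert len(blocks) == m and all(len(x) == n for x in blocks)
        return f(tuple(g(x) for x in blocks))
    return h

# Toy instance: f = 2-bit XOR, g = 3-bit OR.
xor2 = lambda z: z[0] ^ z[1]
or3 = lambda x: int(any(x))
h = diamond(xor2, or3, m=2, n=3)
print(h(((0, 0, 1), (0, 0, 0))))  # g-values are 1, 0 -> XOR = 1
```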
One of the important challenges in circuit complexity is proving strong lower bounds for constant-depth circuits. One possible approach to this problem is to use the framework of Karchmer-Wigderson relations: Karchmer and Wigderson (SIDMA 3(2), 1990) observed that for every Boolean function $f$ there is a corresponding communication problem $\mathrm{KW}_{f}$, ...
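For reference, the communication problem in question is standard (a textbook definition, stated here since the abstract is truncated before it):

```latex
% The Karchmer-Wigderson relation of a Boolean function f:
% Alice gets x with f(x) = 1, Bob gets y with f(y) = 0, and they
% must find a coordinate i on which x and y differ.
\[
\mathrm{KW}_{f}=\left\{ (x,y,i) : f(x)=1,\; f(y)=0,\; x_{i}\neq y_{i}\right\}.
\]
```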
The universal relation is the communication problem in which Alice and Bob get as inputs two distinct strings, and they are required to find a coordinate on which the strings differ. The study of this problem is motivated by its connection to Karchmer-Wigderson relations, which are communication problems that are ...
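A minimal sketch (Python; names are illustrative) of the trivial protocol for this problem, in which Alice sends her whole $n$-bit string and Bob, who holds a different string, answers with a differing coordinate:

```python
def universal_relation_protocol(alice, bob):
    """Trivial protocol for the universal relation: Alice sends her
    entire string; Bob replies with a coordinate on which the two
    (distinct) strings differ. Cost: n + log(n) bits."""
    assert alice != bob and len(alice) == len(bob)
    for i, (a, b) in enumerate(zip(alice, bob)):
        if a != b:
            return i  # Bob's answer: a differing coordinate

print(universal_relation_protocol("0110", "0100"))  # -> 2
```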
One of the major challenges of the research in circuit complexity is proving super-polynomial lower bounds for de Morgan formulas. Karchmer, Raz, and Wigderson suggested approaching this problem by proving that formula complexity behaves “as expected” with respect to the composition of functions $f\circ g$. They showed that this conjecture, ...
An error-correcting code is said to be \emph{locally testable} if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a small number of symbols of the string. Locally testable codes (LTCs) are both interesting in their ...
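As a concrete classical example (standard material, not specific to this paper): the Hadamard code, whose codewords are the truth tables of linear functions over $\mathbb{F}_2$, is locally testable via the Blum-Luby-Rubinfeld (BLR) linearity test, which reads only three symbols per trial.

```python
import random

def blr_test(table, n, trials=100, seed=0):
    """BLR linearity test: given the truth table of a function
    f: {0,1}^n -> {0,1} (a purported Hadamard codeword), pick
    random x, y and check f(x) XOR f(y) == f(x XOR y), reading
    only three symbols of the table per trial."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.randrange(2 ** n), rng.randrange(2 ** n)
        if table[x] ^ table[y] != table[x ^ y]:
            return False  # reject: far from every linear function
    return True  # accept: likely close to a codeword

# The truth table of the linear function f(z) = <a, z> mod 2 passes.
n, a = 4, 0b1010
table = [bin(a & z).count("1") % 2 for z in range(2 ** n)]
print(blr_test(table, n))  # -> True
```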
Locally-correctable codes (LCCs) and locally-testable codes (LTCs) are codes that admit local algorithms for decoding and testing, respectively. The local algorithms are randomized algorithms that make only a small number of queries to their input. LCCs and LTCs are both interesting in their own right, and have important applications in ...
One of the major open problems in complexity theory is proving super-polynomial lower bounds for circuits with logarithmic depth (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^1$). This problem is interesting for two reasons: first, it is tightly related to understanding the power of parallel computation and of small-space computation; second, it is one of the ...
The PCP theorem (Arora et al., J. ACM 45(1,3)) asserts the existence of proofs that can be verified by reading a very small part of the proof. Since the discovery of the theorem, there has been considerable work on improving the theorem in terms of the length of the ...
The PCP theorem (Arora et al., J. ACM 45(1,3)) says that every NP-proof can be encoded to another proof, namely, a probabilistically checkable proof (PCP), which can be tested by a verifier that queries only a small part of the PCP. A natural question is how large the blow-up ...
The PCP theorem asserts the existence of proofs that can be verified by a verifier that reads only a very small part of the proof. The theorem was originally proved by Arora and Safra (J. ACM 45(1)) and Arora et al. (J. ACM 45(3)) using sophisticated algebraic tools. More than ...
We initiate a study of input-oblivious proof systems, and present a few preliminary results regarding such systems. Our results offer a perspective on the intersection of the non-uniform complexity class P/poly with uniform complexity classes such as NP and IP. In particular, we provide a uniform complexity formulation of the ...
The IP theorem, which asserts that IP = PSPACE (Lund et al., and Shamir, in J. ACM 39(4)), is one of the major achievements of complexity theory. The known proofs of the theorem are based on the arithmetization technique, which transforms a quantified Boolean formula into a related polynomial. The ...
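To illustrate the technique (a standard textbook sketch, not the construction from the paper): arithmetization replaces Boolean connectives by polynomial operations, e.g. $x \wedge y \mapsto xy$, $\neg x \mapsto 1-x$, and $x \vee y \mapsto 1-(1-x)(1-y)$, so that the resulting polynomial agrees with the formula on all Boolean inputs.

```python
from itertools import product

# Standard arithmetization of Boolean connectives:
# AND -> product, NOT -> 1 - x, OR(x, y) -> 1 - (1 - x)(1 - y).
def arith_not(x): return 1 - x
def arith_and(x, y): return x * y
def arith_or(x, y): return 1 - (1 - x) * (1 - y)

# A small formula, (x1 OR x2) AND (NOT x3), and its arithmetization.
formula = lambda x1, x2, x3: (x1 or x2) and (not x3)
poly = lambda x1, x2, x3: arith_and(arith_or(x1, x2), arith_not(x3))

# The polynomial agrees with the formula on every Boolean input.
assert all(int(formula(*x)) == poly(*x) for x in product([0, 1], repeat=3))
```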
A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts ...
We define a non-uniform model of PCPs of Proximity, and observe that in this model the non-uniform verifiers can always be made very efficient. Specifically, we show that any non-uniform verifier can be modified to run in time that is roughly polynomial in its randomness and query complexity.
An error-correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. Locally Testable Codes (LTCs) were first systematically studied by ...
Given two codes $R, C$, their tensor product $R \otimes C$ consists of all matrices whose rows are codewords of $R$ and whose columns are codewords of $C$. The product $R \otimes C$ is said to be robust if for every matrix $M$ that is far from $R \otimes C$ it ...
Given two linear codes $R, C$, their tensor product $R \otimes C$ consists of all matrices whose rows are codewords of $R$ and whose columns are codewords of $C$. The product $R \otimes C$ is said to be robust if for every matrix $M$ that is far from $R \otimes C$ ...
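A minimal brute-force sketch (Python, illustrative only) of the tensor-product code as defined above:

```python
import itertools

def tensor_codewords(R, C):
    """Enumerate the codewords of the tensor product of two binary
    codes R and C (lists of tuples): all matrices whose rows are
    codewords of R and whose columns are codewords of C."""
    m, col_set = len(C[0]), set(C)  # m rows; columns must lie in C
    n = len(R[0])                   # each row has length n
    for rows in itertools.product(R, repeat=m):
        if all(tuple(row[j] for row in rows) in col_set
               for j in range(n)):
            yield rows

# Toy example: R = C = the length-3 even-weight (parity) code,
# which has 4 codewords; the tensor product then has 2^(2*2) = 16.
parity3 = [x for x in itertools.product([0, 1], repeat=3)
           if sum(x) % 2 == 0]
print(sum(1 for _ in tensor_codewords(parity3, parity3)))  # -> 16
```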