ECCC - Reports
Latest Reports published at https://eccc.weizmann.ac.il
Wed, 27 Sep 2023 15:47:02 +0300

TR23-146 | On Testing Isomorphism to a Fixed Graph in the Bounded-Degree Graph Model |
Oded Goldreich,
Laliv Tauber
https://eccc.weizmann.ac.il/report/2023/146

We consider the problem of testing isomorphism to a fixed graph in the bounded-degree graph model. Our main result is that, for almost all $d$-regular $n$-vertex graphs $H$,
testing isomorphism to $H$ can be done using $\tilde{O}(\sqrt{n})$ queries.
This result is shown to be optimal (up to a polylog factor) by a matching lower bound, which also holds for almost all graphs $H$.
The performance of our tester depends on natural graph parameters of the fixed ($n$-vertex) graph $H$ such as its diameter and the minimum radius of ``distinguishing neighborhoods'' (i.e., the minimum $r=r(n)$ such that the ``$r$-neighborhoods'' of the $n$ different vertices are pairwise non-isomorphic).
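To make the "distinguishing neighborhoods" parameter concrete, here is a naive brute-force sketch that computes it for a tiny graph (the example graph and all helper names are hypothetical illustrations; an actual tester never materializes all neighborhoods like this):

```python
from itertools import permutations

def r_neighborhood(adj, v, r):
    """Vertices within distance r of v (BFS), with the induced edges; rooted at v."""
    dist = {v: 0}
    frontier = [v]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist and dist[u] < r:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    verts = sorted(dist)
    edges = {(a, b) for a in verts for b in adj[a] if b in dist and a < b}
    return verts, edges, v

def rooted_isomorphic(n1, n2):
    """Brute-force rooted-isomorphism test; fine for tiny neighborhoods only."""
    v1, e1, r1 = n1
    v2, e2, r2 = n2
    if len(v1) != len(v2) or len(e1) != len(e2):
        return False
    for perm in permutations(v2):
        f = dict(zip(v1, perm))
        if f[r1] != r2:
            continue
        if all((min(f[a], f[b]), max(f[a], f[b])) in e2 for a, b in e1):
            return True
    return False

def distinguishing_radius(adj):
    """Smallest r whose r-neighborhoods are pairwise non-isomorphic as rooted graphs.
    Returns None for graphs with nontrivial automorphisms (no such r exists)."""
    n = len(adj)
    for r in range(1, n + 1):
        hoods = [r_neighborhood(adj, v, r) for v in adj]
        if all(not rooted_isomorphic(hoods[i], hoods[j])
               for i in range(n) for j in range(i + 1, n)):
            return r
    return None

# Toy asymmetric graph: a triangle 1-2-3 with a pendant vertex 0 on 1
# and a pendant path 3-4-5.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
```

At radius 1 the two degree-1 vertices (0 and 5) still have isomorphic rooted neighborhoods, but at radius 2 all six neighborhoods differ, so the distinguishing radius of this graph is 2.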
Wed, 27 Sep 2023 15:47:02 +0300

TR23-145 | Total Variation Distance Estimation Is as Easy as Probabilistic Inference |
Arnab Bhattacharyya,
Sutanu Gayen,
Kuldeep S. Meel,
Dimitrios Myrisiotis,
A. Pavan,
N. V. Vinodchandran
https://eccc.weizmann.ac.il/report/2023/145

In this paper, we establish a novel connection between total variation (TV) distance estimation and probabilistic inference. In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models. This reduction leads to a fully polynomial randomized approximation scheme (FPRAS) for estimating TV distances between distributions over any class of Bayes nets for which there is an efficient probabilistic inference algorithm. As a special case, it leads to an FPRAS for estimating TV distances between distributions that are defined by Bayes nets of bounded treewidth. Prior to this work, such approximation schemes only existed for estimating TV distances between product distributions. Our approach employs a new notion of $\mathit{partial}$ couplings of high-dimensional distributions, which might be of independent interest.

Sun, 24 Sep 2023 10:49:53 +0300

TR23-144 | Symmetric Exponential Time Requires Near-Maximum Circuit Size |
Lijie Chen,
Shuichi Hirahara,
Hanlin Ren
https://eccc.weizmann.ac.il/report/2023/144

We show that there is a language in $\mathrm{S}_2\mathrm{E}/_1$ (symmetric exponential time with one bit of advice) with circuit complexity at least $2^n/n$. In particular, the above also implies the same near-maximum circuit lower bounds for the classes $\Sigma_2\mathrm{E}$, $(\Sigma_2\mathrm{E}\cap\Pi_2\mathrm{E})/_1$, and $\mathrm{ZPE}^{\mathrm{NP}}/_1$. Previously, only "half-exponential" circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was $\Delta_3\mathrm{E} = \mathrm{E}^{\Sigma_2\mathrm{P}}$ (Miltersen, Vinodchandran, and Watanabe, COCOON'99).
Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an $\mathrm{NP}$ oracle and one bit of advice ($\mathrm{FZPP}^{\mathrm{NP}}/_1$) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic $\mathrm{FZPP}^{\mathrm{NP}}/_1$ constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and $\mathrm{K}^{\mathrm{poly}}$-random strings with nearly optimal parameters.
Our proofs relativize. The two main technical ingredients are (1) Korten's $\mathrm{P}^{\mathrm{NP}}$ reduction from the range avoidance problem to constructing hard truth tables (FOCS'21), which was in turn inspired by a result of Jeřábek on provability in Bounded Arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS'23).
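The range avoidance problem at the heart of Korten's reduction is easy to state: given a circuit mapping $n$ bits to $m > n$ bits, find a string outside its range (one exists by counting). A minimal brute-force sketch, only to pin down the problem statement; it runs in exponential time, unlike the paper's $\mathrm{FZPP}^{\mathrm{NP}}/_1$ algorithm, and the toy "circuit" is a hypothetical stand-in:

```python
from itertools import product

def avoid(C, n, m):
    """Solve range avoidance by brute force: C maps {0,1}^n -> {0,1}^m with
    m > n, so by counting, some m-bit string lies outside its range; return
    the first one found (exponential time, illustration only)."""
    assert m > n
    image = {C(x) for x in product((0, 1), repeat=n)}
    for y in product((0, 1), repeat=m):
        if y not in image:
            return y

# Toy stretching "circuit" (hypothetical): duplicate the last input bit.
C = lambda x: x + (x[-1],)
y = avoid(C, 3, 4)  # a 4-bit string this C can never output
```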
Fri, 22 Sep 2023 18:01:13 +0300

TR23-143 | Counting Unpredictable Bits: A Simple PRG from One-way Functions |
Noam Mazor,
Rafael Pass
https://eccc.weizmann.ac.il/report/2023/143

A central result in the theory of cryptography, by Håstad, Impagliazzo, Luby, and Levin [SICOMP'99], demonstrates that the existence of one-way functions (OWFs) implies the existence of pseudo-random generators (PRGs). Despite the fundamental importance of this result, and several elegant improvements/simplifications, analyses of constructions of PRGs from OWFs remain complex (both conceptually and technically).
Our goal is to provide a construction of a PRG from OWFs with a simple proof of security; we thus focus on the setting of non-uniform security (i.e., we start off with an OWF secure against non-uniform PPT, and we aim to get a PRG secure against non-uniform PPT).
Our main result is a construction of a PRG from OWFs with a self-contained, simple proof of security, relying only on the Goldreich-Levin Theorem (and the Chernoff bound). Although our main goal is simplicity, the construction, and a variant thereof, also improves the efficiency (in terms of invocations and seed lengths) of the state-of-the-art constructions due to [Haitner-Reingold-Vadhan, STOC'10] and [Vadhan-Zheng, STOC'12], by a factor of $O(\log^2 n)$.
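The Goldreich-Levin ingredient the abstract relies on is concrete enough to sketch: the hardcore predicate is the inner product $\langle x, r\rangle \bmod 2$, and a Blum-Micali-style step outputs $(f(x), r, \langle x, r\rangle)$, one bit longer than its seed. A minimal sketch, with SHA-256 as a purely illustrative stand-in for an OWF (it is not a proven OWF) and toy bit lengths; this is the classic building block, not the paper's full PRG:

```python
import hashlib

def inner_product_bit(x_bits, r_bits):
    """Goldreich-Levin hardcore predicate: the inner product <x, r> mod 2."""
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

def bits(data, n):
    """First n bits of a bytes object, most significant bit first."""
    return [(data[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def f(x_bits):
    """Stand-in one-way function mapping n bits to n bits (n a multiple of 8)."""
    packed = bytes(int(''.join(map(str, x_bits[i:i + 8])), 2)
                   for i in range(0, len(x_bits), 8))
    return bits(hashlib.sha256(packed).digest(), len(x_bits))

def one_extra_bit(x_bits, r_bits):
    """Blum-Micali-style step: on seed (x, r), output (f(x), r, <x, r>);
    one bit longer than the input, and the extra bit is hard to predict
    from the rest when f is one-way."""
    return f(x_bits) + r_bits + [inner_product_bit(x_bits, r_bits)]

# 16-bit toy seed halves (hypothetical sizes, chosen only for the demo).
x = bits(b'\xa5\x3c', 16)
r = bits(b'\x0f\xf0', 16)
out = one_extra_bit(x, r)
```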
The key novelty in our analysis is a generalization of the Blum-Micali [FOCS'82] notion of unpredictability: rather than requiring that every bit in the output of a function is unpredictable, we count how many unpredictable bits a function has, and we show that any OWF on $n$ input bits (after hashing the input and the output) has $n + O(\log n)$ unpredictable output bits. Such unpredictable bits can next be "extracted" into a pseudorandom string using standard techniques.

Fri, 22 Sep 2023 08:03:07 +0300

TR23-142 | Provable Advantage in Quantum PAC Learning |
Wilfred Salmon,
Sergii Strelchuk,
Tom Gur
https://eccc.weizmann.ac.il/report/2023/142

We revisit the problem of characterising the complexity of quantum PAC learning, as introduced by Bshouty and Jackson [SIAM J. Comput. 1998, 28, 1136–1153]. Several quantum advantages have been demonstrated in this setting; however, none are generic: they apply to particular concept classes and typically only work when the distribution that generates the data is known. In the general case, it was recently shown by Arunachalam and de Wolf [JMLR, 19 (2018) 1-36] that quantum PAC learners can only achieve constant-factor advantages over classical PAC learners.
We show that with a natural extension of the definition of quantum PAC learning used by Arunachalam and de Wolf, we can achieve a generic advantage in quantum learning. To be precise, for any concept class $\mathcal{C}$ of VC dimension $d$, we show there is an $(\epsilon, \delta)$-quantum PAC learner with sample complexity
\[
O\left(\frac{1}{\sqrt{\epsilon}}\left[d+ \log(\frac{1}{\delta})\right]\log^9(1/\epsilon)\right).
\]
Up to polylogarithmic factors, this is a square-root improvement over the classical learning sample complexity. We show the tightness of our result by proving an $\Omega(d/\sqrt{\epsilon})$ lower bound that matches our upper bound up to polylogarithmic factors.

Thu, 21 Sep 2023 23:52:23 +0300

TR23-141 | A Tight Lower Bound of $\Omega(\log n)$ for the Estimation of the Number of Defective Items |
Nader Bshouty,
Gergely Harcos
https://eccc.weizmann.ac.il/report/2023/141

Let $X$ be a set of items of size $n$, which may contain some defective items denoted by $I$, where $I \subseteq X$. In group testing, a {\it test} refers to a subset of items $Q \subset X$. The test outcome is $1$ (positive) if $Q$ contains at least one defective item, i.e., $Q\cap I \neq \emptyset$, and $0$ (negative) otherwise.
We give a novel approach to obtaining tight lower bounds in non-adaptive randomized group testing. Employing this new method, we can prove the following result.
Any non-adaptive randomized algorithm that, for any set of defective items $I$, with probability at least $2/3$, returns an estimate of the number of defective items $|I|$ to within a constant factor must use $\Omega(\log n)$ tests.
Our result matches the upper bound of $O(\log n)$ and solves the open problem posed by Damaschke and Sheikh Muhammad.

Wed, 20 Sep 2023 09:15:23 +0300

TR23-140 | Extractors for Polynomial Sources over $\mathbb{F}_2$ |
Mohit Gurumukhani,
Jesse Goodman,
Eshan Chattopadhyay
https://eccc.weizmann.ac.il/report/2023/140

We explicitly construct the first nontrivial extractors for degree $d \ge 2$ polynomial sources over $\mathbb{F}_2^n$. Our extractor requires min-entropy $k\geq n - \frac{\sqrt{\log n}}{(d\log \log n)^{d/2}}$. Previously, no constructions were known, even for min-entropy $k\geq n-1$. A key ingredient in our construction is an input reduction lemma, which allows us to assume that any polynomial source with min-entropy $k$ can be generated by $O(k)$ uniformly random bits.
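A polynomial source of the kind discussed here is simple to sample: each output bit is a low-degree polynomial of a short uniform seed. The sketch below (toy parameters, hypothetical helper names) illustrates the input-reduction view that a source generated by $m$ seed bits has support size, and hence min-entropy, at most $m$:

```python
import random
from itertools import product

def random_quadratic_map(m, n, rng):
    """A random degree-2 polynomial map F_2^m -> F_2^n: each output bit is
    a quadratic polynomial in the m seed bits (constant + linear +
    quadratic monomials, coefficients in F_2)."""
    polys = []
    for _ in range(n):
        const = rng.randrange(2)
        lin = [rng.randrange(2) for _ in range(m)]
        quad = {(i, j): rng.randrange(2)
                for i in range(m) for j in range(i + 1, m)}
        polys.append((const, lin, quad))
    return polys

def evaluate(polys, x):
    """Evaluate the polynomial map at a seed x in {0,1}^m."""
    out = []
    for const, lin, quad in polys:
        b = const
        b ^= sum(c & xi for c, xi in zip(lin, x)) % 2
        b ^= sum(c & x[i] & x[j] for (i, j), c in quad.items()) % 2
        out.append(b)
    return tuple(out)

# A degree-2 source on n = 8 output bits generated by m = 3 uniform seed
# bits: its support has at most 2^3 points, so min-entropy is at most 3.
rng = random.Random(1)
polys = random_quadratic_map(3, 8, rng)
support = {evaluate(polys, x) for x in product((0, 1), repeat=3)}
```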
We also provide strong formal evidence that polynomial sources are unusually challenging to extract from, by showing that even our most powerful general purpose extractors cannot handle polynomial sources with min-entropy below $n-o(n)$. In more detail, we show that sumset extractors cannot even disperse from degree $2$ polynomial sources with min-entropy $k\geq n-O(n/\log\log n)$. In fact, this impossibility result even holds for a more specialized family of sources that we introduce, called polynomial non-oblivious bit-fixing (NOBF) sources. Polynomial NOBF sources are a natural new family of algebraic sources that lie at the intersection of polynomial and variety sources, and thus our impossibility result applies to both of these classical settings. This is especially surprising, since we do have variety extractors that slightly beat this barrier, implying that sumset extractors are not a panacea in the world of seedless extraction.

Wed, 20 Sep 2023 09:03:45 +0300

TR23-139 | Deterministic Algorithms for Low Degree Factors of Constant Depth Circuits |
Varun Ramanathan,
Mrinal Kumar,
Ramprasad Saptharishi
https://eccc.weizmann.ac.il/report/2023/139

For every constant $d$, we design a subexponential time deterministic algorithm that takes as input a multivariate polynomial $f$ given as a constant depth algebraic circuit over the field of rational numbers, and outputs all irreducible factors of $f$ of degree at most $d$ together with their respective multiplicities. Moreover, if $f$ is a sparse polynomial, then the algorithm runs in quasipolynomial time.
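The divisibility-testing ingredient can be made concrete in the simplest possible setting: exact division of univariate polynomials over the rationals. This is only a naive coefficient-level analogue, a hypothetical sketch; the paper works with multivariate polynomials given as circuits and reduces divisibility testing to PIT rather than doing long division.

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Exact quotient and remainder of f divided by g over Q. Polynomials
    are coefficient lists, lowest degree first: [-2, 1, -2, 1] is
    x^3 - 2x^2 + x - 2."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        while r and r[-1] == 0:          # drop leading zeros
            r.pop()
        if len(r) < len(g):
            break
        c = r[-1] / g[-1]                # cancel the leading term of r
        k = len(r) - len(g)
        q[k] = c
        for i, gc in enumerate(g):
            r[i + k] -= c * gc
    return q, r

def divides(g, f):
    """True iff g divides f exactly over Q (zero remainder)."""
    _, r = poly_divmod(f, g)
    return all(c == 0 for c in r)

# f = (x^2 + 1)(x - 2) = x^3 - 2x^2 + x - 2
f = [-2, 1, -2, 1]
```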
Our results are based on a more fine-grained connection between polynomial identity testing (PIT) and polynomial factorization in the context of constant-degree factors, and rely on a clean connection between divisibility testing of polynomials and PIT due to Forbes, and on subexponential-time deterministic PIT algorithms for constant-depth algebraic circuits from the recent work of Limaye, Srinivasan, and Tavenas.

Mon, 18 Sep 2023 15:22:53 +0300

TR23-138 | An improved protocol for ExactlyN with more than 3 players |
Lianna Hambardzumyan,
Toniann Pitassi,
Suhail Sherif,
Morgan Shirley,
Adi Shraibman
https://eccc.weizmann.ac.il/report/2023/138

The ExactlyN problem in the number-on-forehead (NOF) communication setting asks $k$ players, each of whom can see every input but their own, if the $k$ input numbers add up to $N$. Introduced by Chandra, Furst and Lipton in 1983, ExactlyN is important for its role in understanding the strength of randomness in communication complexity with many players. It is also tightly connected to the field of combinatorics: its $k$-party NOF communication complexity is related to the size of the largest corner-free subset in $[N]^{k-1}$.
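As a baseline for what the improved protocols must beat, consider the trivial deterministic NOF protocol, which costs about $\log_2 N + 1$ bits. The sketch below simulates this baseline only (a hypothetical illustration of the model, not the protocol of this paper):

```python
def exactly_n_trivial(xs, N):
    """Trivial NOF baseline for ExactlyN, costing about log2(N) + 1 bits.
    Player 1 sees every input except their own x_1, so they can broadcast
    s = x_2 + ... + x_k; player 2 sees x_1 and broadcasts the answer."""
    s = sum(xs[1:])              # player 1's message: ~log2(N) bits
    return int(xs[0] + s == N)   # player 2's message: 1 bit
```

Nontrivial protocols, like the ones studied here, communicate asymptotically fewer than $\log_2 N$ bits, which is exactly what makes them equivalent to large corner-free sets.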
In 2021, Linial and Shraibman gave more efficient protocols for ExactlyN for 3 players. As an immediate consequence, this also gave a new construction of larger corner-free subsets in $[N]^2$. Later that year Green gave a further refinement to their argument. These results represent the first improvements to the highest-order term for $k=3$ since the famous work of Behrend in 1946. In this paper we give a corresponding improvement to the highest-order term for all $k>3$, the first since Rankin in 1961. That is, we give a more efficient protocol for ExactlyN as well as larger corner-free sets in higher dimensions.
Nearly all previous results in this line of research approached the problem from the combinatorics perspective, implicitly resulting in non-constructive protocols for ExactlyN. Approaching the problem from the communication complexity point of view and constructing explicit protocols for ExactlyN was key to the improvements in the $k=3$ setting. As a further contribution, we provide explicit protocols for ExactlyN for any number of players, which serve as a basis for our improvement.

Mon, 18 Sep 2023 08:01:35 +0300

TR23-137 | Communication Lower Bounds of Key-Agreement Protocols via Density Increment Arguments |
Mi-Ying Huang,
Xinyu Mao,
Guangxu Yang,
Jiapeng Zhang
https://eccc.weizmann.ac.il/report/2023/137

Constructing key-agreement protocols in the random oracle model (ROM) is a viable method to assess the feasibility of developing public-key cryptography within Minicrypt. Unfortunately, as shown by Impagliazzo and Rudich (STOC 1989) and Barak and Mahmoody (Crypto 2009), such protocols can only guarantee limited security: any $\ell$-query protocol can be attacked by an $O(\ell^2)$-query adversary. This quadratic gap matches the key-agreement protocol proposed by Merkle (CACM 78), known as Merkle's Puzzles.
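Merkle's Puzzles can be sketched in a few lines, with a hash function standing in for the random oracle; the parameters, helper names, and the choice of SHA-256 are all assumptions of this illustration, not details from the paper:

```python
import hashlib
import random

def H(x):
    """Stand-in for the random oracle: hashes an integer query point."""
    return hashlib.sha256(x.to_bytes(8, 'big')).hexdigest()

def merkle_puzzles(ell, rng):
    """Merkle's Puzzles over a domain of size ell^2. Alice makes ell oracle
    queries on random points and publishes the hashes; Bob queries random
    points (about ell tries in expectation) until he hits one of Alice's,
    then announces the matching hash. Both parties now share that preimage
    as a key, while an eavesdropper who only sees the ~ell hashes needs
    about ell^2 oracle queries to find it."""
    domain = ell * ell
    alice_points = rng.sample(range(domain), ell)
    published = {H(x): x for x in alice_points}   # Alice's message
    while True:
        y = rng.randrange(domain)
        if H(y) in published:                     # Bob found a match
            return H(y), published[H(y)]          # (transcript, shared key)

rng = random.Random(7)
tag, key = merkle_puzzles(32, rng)
```

Note that the honest parties also exchange $\Omega(\ell)$ bits (the $\ell$ published hashes), which is exactly the communication cost this paper proves unavoidable for non-adaptive protocols.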
Besides query complexity, the communication complexity of key-agreement protocols in the ROM is also an interesting question in the realm of fine-grained cryptography, even though only limited security is achievable. Haitner et al. (ITCS 2019) first observed that in Merkle's Puzzles, to obtain secrecy against an eavesdropper with $O(\ell^2)$ queries, the honest parties must exchange $\Omega(\ell)$ bits.
Therefore, they conjectured that high communication complexity is unavoidable: any $\ell$-query protocol with $c$ bits of communication could be attacked by an $O(c\cdot \ell)$-query adversary.
This, if true, would suggest that Merkle's Puzzles is also optimal regarding communication complexity. Building upon techniques from communication complexity, Haitner et al. (ITCS 2019) confirmed this conjecture for two types of key-agreement protocols with certain natural properties.
This work affirms the above conjecture for all non-adaptive protocols with perfect completeness.
Our proof uses a novel idea called the density increment argument. This method could be of independent interest, as it differs from previous communication lower bound techniques (and bypasses some technical barriers).

Mon, 18 Sep 2023 07:59:58 +0300