ECCC - Reports
Latest Reports published at https://eccc.weizmann.ac.il

TR24-171 | Condensing against Online Adversaries |
Eshan Chattopadhyay,
Mohit Gurumukhani,
Noam Ringach
https://eccc.weizmann.ac.il/report/2024/171

We investigate the task of deterministically condensing randomness from Online Non-Oblivious Symbol Fixing (oNOSF) sources, a natural model of defective random sources for which it is known that extraction is impossible [AORSV, EUROCRYPT'20]. A $(g,\ell)$-oNOSF source is a sequence of $\ell$ blocks $\mathbf{X} = (\mathbf{X}_1, \dots, \mathbf{X}_{\ell})\sim (\{0, 1\}^{n})^{\ell}$, where at least $g$ of the blocks are good (i.e., independent and with some min-entropy), and the remaining bad blocks are controlled by an online adversary: each bad block can be arbitrarily correlated with any blocks that appear before it.
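To make the model concrete, here is a minimal Python sketch of sampling from a $(g,\ell)$-oNOSF source. The particular adversary, the random choice of which positions are good, and all names here are illustrative assumptions, not constructions from the paper.

```python
import random

def sample_onosf(g, ell, n, adversary, rng=random):
    """Toy (g, ell)-oNOSF source: g "good" blocks are uniform n-bit
    strings (full min-entropy); each remaining "bad" block is produced
    by an online adversary that sees only the blocks emitted before it."""
    good = set(rng.sample(range(ell), g))  # illustrative choice of good positions
    blocks = []
    for i in range(ell):
        if i in good:
            blocks.append(tuple(rng.getrandbits(1) for _ in range(n)))
        else:
            blocks.append(adversary(blocks, n))  # may depend arbitrarily on the prefix
    return blocks

def copy_adversary(prefix, n):
    """Hypothetical adversary: repeat the last block seen (zeros if none)."""
    return prefix[-1] if prefix else (0,) * n

src = sample_onosf(g=3, ell=5, n=4, adversary=copy_adversary)
assert len(src) == 5 and all(len(b) == 4 for b in src)
```

A condenser for this model must produce an output with high min-entropy rate no matter which such adversary fills the bad blocks.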
The existence of condensers was recently studied in [CGR, FOCS'24]. They proved condensing impossibility results for various values of $g$ and $\ell$, and they showed the existence of condensers matching the impossibility results in the special case when $n$ is extremely large compared to $\ell$ (i.e., the setting of few blocks of large length).
In this work, we make significant progress on proving the existence of condensers with strong parameters in almost all parameter regimes, even when $n$ is a large enough constant and $\ell$ is growing. This almost resolves the question of the existence of condensers for oNOSF sources, except when $n$ is a small constant.
As our next result, we construct the first explicit condensers for oNOSF sources and achieve parameters that match the existential results of [CGR, FOCS'24]. We also obtain a much improved construction for transforming low-entropy oNOSF sources (where the good blocks only have min-entropy, as opposed to being uniform) into uniform oNOSF sources.
We find interesting connections and applications of our results on condensers to collective coin flipping and collective sampling, problems that are well-studied in fault-tolerant distributed computing. We use our condensers to provide very simple protocols for these problems.
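For orientation on the collective coin flipping problem mentioned above, the classical one-round majority protocol (a textbook baseline, not the condenser-based protocols of the paper) can be sketched as follows:

```python
import random

def majority_coin_flip(n, n_bad, bad_bit=1, rng=random):
    """One round of the classical majority protocol for collective coin
    flipping: each of n players broadcasts one bit and the outcome is
    the majority bit. Honest players flip fair coins; the n_bad faulty
    players all broadcast bad_bit."""
    bits = [rng.getrandbits(1) for _ in range(n - n_bad)] + [bad_bit] * n_bad
    return int(2 * sum(bits) > n)  # majority vote (n odd avoids ties)

random.seed(0)
# With no faulty players the outcome is a fair coin...
freq = sum(majority_coin_flip(21, 0) for _ in range(10_000)) / 10_000
assert 0.4 < freq < 0.6
# ...while a coalition controlling every player fixes the outcome.
assert majority_coin_flip(21, 21) == 1
```

The design question in fault-tolerant distributed computing is how large a faulty coalition a protocol can tolerate while keeping both outcomes of the collective coin reasonably likely.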
Finally, to understand the case of small $n$, we focus on $n=1$, which corresponds to online non-oblivious bit-fixing (oNOBF) sources. We introduce and initiate a systematic study of a new, natural notion of influence of Boolean functions, which we call online influence and believe to be of independent interest. Using tools from Boolean Fourier analysis, we establish tight bounds on the total online influence of Boolean functions, which imply extraction lower bounds. Several problems remain open regarding this new measure of influence; progress on these will lead to improved extractors and condensers for oNOBF sources, or will further strengthen our lower bounds.
Thu, 07 Nov 2024 00:38:49 +0200

TR24-170 | Corners in Quasirandom Groups via Sparse Mixing |
Michael Jaber,
Shachar Lovett,
Anthony Ostuni
https://eccc.weizmann.ac.il/report/2024/170

We improve the best known upper bounds on the density of corner-free sets over quasirandom groups from inverse poly-logarithmic to quasi-polynomial. We make similarly substantial improvements to the best known lower bounds on the communication complexity of a large class of permutation functions in the 3-player Number-on-Forehead model. Underpinning both results is a general combinatorial theorem that extends the recent work of Kelley, Lovett, and Meka (STOC 2024), itself a development of ideas from the breakthrough result of Kelley and Meka on three-term arithmetic progressions (FOCS 2023).
Tue, 05 Nov 2024 04:53:56 +0200

TR24-169 | The Randomness Complexity of Differential Privacy |
Clément Canonne,
Francis Su,
Salil Vadhan
https://eccc.weizmann.ac.il/report/2024/169

We initiate the study of the *randomness complexity* of differential privacy, i.e., how many random bits an algorithm needs in order to generate accurate differentially private releases. As a test case, we focus on the task of releasing the results of $d$ counting queries, or equivalently all one-way marginals on a $d$-dimensional dataset with Boolean attributes. While standard differentially private mechanisms for this task have randomness complexity that grows linearly with $d$, we show that, surprisingly, only $\log_2 d+O(1)$ random bits (in expectation) suffice to achieve an error that depends polynomially on $d$ (and is independent of the size $n$ of the dataset); furthermore, this is possible with pure, unbounded differential privacy and privacy-loss parameter $\varepsilon=1/\mathrm{poly}(d)$. Conversely, we show that at least $\log_2 d-O(1)$ random bits are necessary for nontrivial accuracy, even with approximate, bounded DP, provided the privacy-loss parameters satisfy $\varepsilon,\delta\leq 1/\mathrm{poly}(d)$. We obtain our results by establishing a close connection between the randomness complexity of differentially private mechanisms and the geometric notion of "secluded partitions" of $\mathbb{R}^d$ recently introduced and studied by Vander Woude et al. (2022, 2023).
Mon, 04 Nov 2024 22:57:09 +0200

TR24-168 | Multiplicative Extractors for Samplable Distributions |
Ronen Shaltiel
https://eccc.weizmann.ac.il/report/2024/168

Trevisan and Vadhan (FOCS 2000) introduced the notion of (seedless) extractors for samplable distributions as a possible solution to the problem of extracting random keys for cryptographic protocols from weak sources of randomness.
They showed that, under a very strong complexity-theoretic assumption, there exists a constant $\alpha>0$ such that for every constant $c \ge 1$ there is an extractor $\mathsf{Ext}:\{0,1\}^n \to \{0,1\}^{\Omega(n)}$ such that for every distribution $X$ over $\{0,1\}^n$ with $H_{\infty}(X) \ge (1-\alpha) \cdot n$, $\mathsf{Ext}(X)$ is $\epsilon$-close to uniform for $\epsilon=\frac{1}{n^c}$; furthermore, $\mathsf{Ext}$ is computable in time $\mathrm{poly}(n^c)$.
Recently, Ball, Goldin, Dachman-Soled and Mutreja (FOCS 2023) gave a substantial improvement, and achieved the same conclusion under the weaker (and by now standard) assumption that there exists a constant $\beta>0$ and a problem in $\mathbf{E}=\mathbf{DTIME}(2^{O(n)})$ that requires size $2^{\beta n}$ nondeterministic circuits.
In this paper we give an alternative proof of this result with the following advantages:

- Our extractors have "multiplicative error". More specifically, it is guaranteed that for every event $A \subseteq \{0,1\}^m$, $\Pr[\mathsf{Ext}(X) \in A] \le (1+\epsilon) \cdot \Pr[U_m \in A]$. (This should be contrasted with the standard notion, which only implies $\Pr[\mathsf{Ext}(X) \in A] \le \epsilon +\Pr[U_m \in A]$.) Consequently, unlike the extractors of Trevisan and Vadhan, and of Ball et al., our multiplicative extractors guarantee the following in the application of selecting keys for cryptographic protocols: if, when choosing a truly random key, the probability that an adversary can steal the honest party's money is $n^{-\omega(1)}$, then the same holds when the output of the extractor is used as a key. A related notion of multiplicative extractors was defined by Applebaum, Artemenko, Shaltiel and Yang (CCC 2015), who showed that black-box techniques cannot yield extractors with additive error $\epsilon=n^{-\omega(1)}$ under the assumption used by Ball et al. or by Trevisan and Vadhan. This motivated Applebaum et al. to consider multiplicative extractors, and they gave constructions based on the original hardness assumption of Trevisan and Vadhan.
- Our proof is significantly simpler and more modular than that of Ball et al. (and arguably also than that of Trevisan and Vadhan). A key observation is that the extractors we want to construct easily follow from a seed-extending pseudorandom generator against nondeterministic circuits (with the twist that the error is measured multiplicatively, as in computational differential privacy). We then proceed to construct such pseudorandom generators under the hardness assumption. This turns out to be easier (utilizing, among other things, ideas by Trevisan and Vadhan, and by Ball et al.).
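The gap between the two error notions above can be seen by brute force on a toy domain; the helper functions below are illustrative and compute the smallest $\epsilon$ achieving each guarantee.

```python
from itertools import chain, combinations

def events(m):
    """All events A over a domain of size m."""
    return chain.from_iterable(combinations(range(m), r) for r in range(m + 1))

def additive_error(pX, pU):
    """Smallest eps with Pr[X in A] <= Pr[U in A] + eps for every event A."""
    return max(sum(pX[i] - pU[i] for i in A) for A in events(len(pX)))

def multiplicative_error(pX, pU):
    """Smallest eps with Pr[X in A] <= (1+eps) * Pr[U in A] for every
    event A; checking singletons suffices (assumes pU has full support)."""
    return max(pX[i] / pU[i] for i in range(len(pX))) - 1

# A "bad" event of probability 10^-6 under pU can blow up to 10^-2 while
# the additive error stays around 0.01; the multiplicative error exposes it.
pU = [1e-6, 1 - 1e-6]
pX = [1e-2, 1 - 1e-2]
assert additive_error(pX, pU) < 0.011
assert multiplicative_error(pX, pU) > 9000
```

This is exactly the cryptographic concern in the first item: an additive guarantee of $\epsilon$ says nothing useful about events whose probability under uniform is far below $\epsilon$.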
Trevisan and Vadhan also asked whether lower bounds against nondeterministic circuits are \emph{necessary} to achieve extractors for samplable distributions. While we cannot answer this question, we show that the proof techniques used in our paper (as well as those used in previous work) produce extractors which imply seed-extending PRGs against nondeterministic circuits, which in turn imply lower bounds against nondeterministic circuits.
Sun, 03 Nov 2024 11:49:10 +0200

TR24-167 | Space-bounded quantum interactive proof systems |
Yupan Liu,
François Le Gall,
Qisheng Wang,
Harumichi Nishimura
https://eccc.weizmann.ac.il/report/2024/167

We introduce two models of space-bounded quantum interactive proof systems, $\mathbf{QIPL}$ and $\mathbf{QIP_\mathrm{U}L}$. The $\mathbf{QIP_\mathrm{U}L}$ model, a space-bounded variant of quantum interactive proofs ($\mathbf{QIP}$) introduced by Watrous (CC 2003) and Kitaev and Watrous (STOC 2000), restricts verifier actions to unitary circuits. In contrast, $\mathbf{QIPL}$ allows logarithmically many intermediate measurements per verifier action (with a high-concentration condition on yes instances), making it the weakest model that encompasses the classical model of Condon and Ladner (JCSS 1995).
We characterize the computational power of $\mathbf{QIPL}$ and $\mathbf{QIP_\mathrm{U}L}$. When the message number $m$ is polynomially bounded, $\mathbf{QIP_\mathrm{U}L}$ is strictly contained in $\mathbf{QIPL}$ unless $\mathbf{P} = \mathbf{NP}$:
- $\mathbf{QIPL}$ exactly characterizes $\mathbf{NP}$.
- $\mathbf{QIP_\mathrm{U}L}$ is contained in $\mathbf{P}$ and contains $\mathbf{SAC}^1 \cup \mathbf{BQL}$, where $\mathbf{SAC}^1$ denotes problems solvable by classical logarithmic-depth, semi-unbounded fan-in circuits.
However, this distinction vanishes when $m$ is constant. Our results further indicate that intermediate measurements uniquely impact space-bounded quantum interactive proofs, unlike in space-bounded quantum computation, where $\mathbf{BQL}=\mathbf{BQ_\mathrm{U}L}$.
We also introduce space-bounded unitary quantum statistical zero-knowledge ($\mathbf{QSZK_\mathrm{U}L}$), a specific form of $\mathbf{QIP_\mathrm{U}L}$ proof systems with statistical zero-knowledge against any verifier. This class is a space-bounded variant of quantum statistical zero-knowledge ($\mathbf{QSZK}$) defined by Watrous (SICOMP 2009). We prove that $\mathbf{QSZK_\mathrm{U}L} = \mathbf{BQL}$, implying that the statistical zero-knowledge property negates the computational advantage typically gained from interaction.
Thu, 31 Oct 2024 09:06:27 +0200

TR24-166 | Commuting Local Hamiltonians Beyond 2D |
John Bostanci,
Yeongwoo Hwang
https://eccc.weizmann.ac.il/report/2024/166

Commuting local Hamiltonians provide a testing ground for studying many of the most interesting open questions in quantum information theory, including the quantum PCP conjecture and the existence of area laws. Although they are a simplified model of quantum computation, the status of the commuting local Hamiltonian problem remains largely unknown. A number of works have shown that increasingly expressive families of commuting local Hamiltonians admit completely classical verifiers. Despite intense work, the largest class of commuting local Hamiltonians we can place in NP is that of Hamiltonians on a square lattice, where each lattice site is a qutrit. Even worse, many of the techniques used to analyze these problems rely heavily on the geometry of the square lattice and the properties of the numbers 2 and 3 as local dimensions. In this work, we present a new technique to analyze the complexity of various families of commuting local Hamiltonians: guided reductions. Intuitively, these are a generalization of a typical reduction, in which the prover provides a guide so that the verifier can construct a simpler Hamiltonian. The core of our reduction is a new rounding technique based on a combination of Jordan's Lemma and the Structure Lemma. Our rounding technique is much more flexible than in previous work, and allows us to show that a larger family of commuting local Hamiltonians is in NP, albeit with the restriction that all terms are rank-1. Specifically, we prove the following two results:
1. Commuting local Hamiltonians in 2D that are rank-1 are contained in NP, independent of the qudit dimension. Note that this family of commuting local Hamiltonians has no restriction on the local dimension or the locality.
2. We prove that rank-1, 3D commuting Hamiltonians with qudits on edges are in NP. To our knowledge, this is the first time a family of 3D commuting local Hamiltonians has been shown to lie in NP.
Tue, 29 Oct 2024 08:31:09 +0200

TR24-165 | Online Condensing of Unpredictable Sources via Random Walks |
Dean Doron,
Dana Moshkovitz,
David Zuckerman,
Justin Oh
https://eccc.weizmann.ac.il/report/2024/165

A natural model of a source of randomness consists of a long stream of symbols $X = X_1\circ\ldots\circ X_t$, with some guarantee on the entropy of $X_i$ conditioned on the outcome of the prefix $x_1,\dots,x_{i-1}$. We study unpredictable sources, a generalization of the almost Chor--Goldreich (CG) sources considered in [DMOZ23]. In an unpredictable source $X$, for a typical draw of $x\sim X$, for most $i$, $x_i$ has a low probability of occurring given $x_1,\dots,x_{i-1}$. Such a model relaxes the unrealistic assumption of a CG source that for every $i$, and every $x_1,\dots,x_{i-1}$, the next symbol $X_i$ has sufficiently large entropy.
Unpredictable sources subsume all previously considered notions of almost CG sources, including those for which it was unknown whether random walks using $X$ mix, and including those that are equivalent to general sources with high min entropy.
We prove that random walks using unpredictable sources do mix, and as a result obtain seeded online condensers with constant entropy gap, and (seedless) deterministic condensers outputting a constant fraction of the entropy. In particular, our condensers run in space comparable to the total entropy of the stream $X$, even when its length is not known ahead of time. As another corollary, we obtain a new extractor based on expander random walks handling lower entropy than the classic expander based construction relying on spectral techniques [Gil98].
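As a toy illustration of how feeding stream symbols into a walk flattens the vertex distribution, consider the simple shift walk on $\mathbb{Z}_m$. This stand-in graph and the numbers below are illustrative assumptions; the paper's analysis concerns walks on lossless expanders.

```python
def walk_step(vdist, sdist):
    """One step of the shift walk on Z_m: symbol s moves vertex v to
    (v + s) mod m, so the next vertex distribution is the cyclic
    convolution of the current one with the symbol distribution."""
    m = len(vdist)
    return [sum(vdist[v] * sdist[(u - v) % m] for v in range(m)) for u in range(m)]

vdist = [1.0, 0.0, 0.0]   # walk starts at vertex 0
sdist = [0.5, 0.3, 0.2]   # biased, but entropy-positive, symbol distribution
for _ in range(5):
    vdist = walk_step(vdist, sdist)
assert abs(sum(vdist) - 1.0) < 1e-9
assert abs(max(vdist) - 1 / 3) < 0.01  # vertex distribution flattens toward uniform
```

Each step convolves in the fresh symbol's entropy, so the maximum vertex probability can only decrease; the hard part, addressed in the paper, is proving such mixing when the symbols are only guaranteed to be unpredictable rather than independent.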
As our main technical tool, we provide a novel analysis covering a key case of adversarial random walks on lossless expanders that [DMOZ23] fails to address. As part of the analysis, we establish a "chain rule for vertex probabilities". The standard chain rule states that for every $x \sim X$ and $i$, $\Pr(x_1,\dots,x_i)= \Pr[X_i = x_i|X_{[1,i-1]}=x_1,\dots,x_{i-1}] \cdot \Pr(x_1,\dots,x_{i-1})$. If $W(x_1,\dots,x_i)$ is the vertex reached using $x_1,\dots,x_i$, then the chain rule for vertex probabilities essentially states that the same phenomenon occurs for a typical $x$:
\[\Pr [V_i = W(x_1,\dots,x_i)] \leq \Pr[X_i = x_i|X_{[1,i-1]}=x_1,\dots,x_{i-1}] \cdot \Pr[V_{i-1} = W(x_1,\dots,x_{i-1})],\]
where $V_i$ is the vertex distribution of the random walk at step $i$ using $X$.
Sun, 27 Oct 2024 14:28:32 +0200

TR24-164 | Low Degree Local Correction Over the Boolean Cube |
Prashanth Amireddy,
Amik Raj Behera,
Manaswi Paraashar,
Srikanth Srinivasan,
Madhu Sudan
https://eccc.weizmann.ac.il/report/2024/164

In this work, we show that the class of multivariate degree-$d$ polynomials mapping $\{0,1\}^{n}$ to any Abelian group $G$ is locally correctable with $\widetilde{O}_{d}((\log n)^{d})$ queries for up to a fraction of errors approaching half the minimum distance of the underlying code. In particular, this result holds even for polynomials over the reals or the rationals, special cases that were previously not known. Further, we show that they are locally list correctable up to a fraction of errors approaching the minimum distance of the code. These results build on and extend the prior work of Amireddy, Behera, Paraashar, Srinivasan, and Sudan [ABPSS24] (STOC 2024) who considered the case of linear polynomials ($d=1$) and gave analogous results.
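For orientation, the simplest special case of local correction ($d=1$ over $G=\mathbb{F}_2$) admits the classical BLR-style self-corrector sketched below. This is a textbook warm-up under those restrictive assumptions, not the construction of the paper, whose setting of arbitrary Abelian groups and higher degree requires new techniques.

```python
import random
from collections import Counter

def blr_correct(g, n, x, trials=25):
    """Textbook BLR self-corrector for F2-linear functions on n-bit
    inputs: if oracle g agrees with some linear f on a large majority
    of points, then g(x XOR r) XOR g(r) equals f(x) for most random r,
    so a majority vote over independent r recovers f(x)."""
    votes = Counter()
    for _ in range(trials):
        r = random.getrandbits(n)
        votes[g(x ^ r) ^ g(r)] += 1
    return votes.most_common(1)[0][0]

# f = parity of bits (a linear function); corrupt it on a few inputs.
n = 6
f = lambda x: bin(x).count("1") % 2
corrupted = {3, 17, 40}
g = lambda x: f(x) ^ (1 if x in corrupted else 0)
random.seed(1)
assert all(blr_correct(g, n, x) == f(x) for x in range(2 ** n))
```

The corrector queries $g$ only at random-looking points, which is why a small fraction of adversarial corruptions is tolerated; the paper's challenge is achieving an analogous guarantee when $\{0,1\}^n$ carries no group structure compatible with $G$.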
Low-degree polynomials over the Boolean cube $\{0,1\}^{n}$ arise naturally in Boolean circuit complexity and learning theory, and our work furthers the study of their coding-theoretic properties. Extending the results of [ABPSS24] from linear polynomials to higher-degree polynomials involves several new challenges, and handling them gives us further insights into the properties of low-degree polynomials over the Boolean cube. For local correction, we construct a set of points in the Boolean cube that lies between two exponentially close parallel hyperplanes and is moreover an interpolating set for degree-$d$ polynomials. To show that the class of degree-$d$ polynomials is list decodable up to the minimum distance, we stitch together results on anti-concentration of low-degree polynomials, the Sunflower lemma, and the Footprint bound for counting common zeroes of polynomials. Analyzing the local list corrector of [ABPSS24] for higher-degree polynomials involves understanding random restrictions of non-zero degree-$d$ polynomials on a Hamming slice. In particular, we show that a simple random restriction process for reducing the dimension of the Boolean cube is a suitably good sampler for Hamming slices. Thus our exploration unearths several new techniques that are useful in understanding the combinatorial structure of low-degree polynomials over $\{0,1\}^{n}$.
Fri, 25 Oct 2024 01:31:41 +0300

TR24-163 | On defining PPT-search problems and PPT-optimization problems |
Roei Tell
https://eccc.weizmann.ac.il/report/2024/163

This note revisits the study of search problems that are solvable in probabilistic polynomial time. Previously, Goldreich (2011) introduced a class called "$\mathcal{BPP}$-search", and studied search-to-decision reductions for problems in this class.
In Goldreich's original formulation, the definition of what counts as ``successfully solving'' a $\mathcal{BPP}$-search problem is implicit, and this opens the door to multiple possible interpretations. We suggest two approaches:
1. We propose *our preferred definition* for the notion of solving $\mathcal{BPP}$-search problems. The resulting class of search problems is useful, since it captures natural problems of interest and since it has valuable technical properties. However, the a priori rationale for studying the class is not fully satisfying.
2. We propose an *alternative formulation* of the class of search problems solvable in ppt, which is focused on optimization (i.e., finding a solution of best quality). The resulting class has a clearer meaning, and it turns out to be *computationally equivalent* to the class resulting from our aforementioned definition of $\mathcal{BPP}$-search (under deterministic polynomial-time reductions).
The optimization-based definition seems cleaner and more appealing. However, since the two classes above are computationally equivalent, it can be useful to use whichever definition is technically more convenient in the relevant context.
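A canonical example of a search problem solvable in ppt is finding a quadratic non-residue modulo an odd prime; the sketch below is an illustration of the flavor of such problems, not an example taken from the note itself.

```python
import random

def find_qnr(p, trials=64):
    """Toy ppt search: find a quadratic non-residue mod an odd prime p.
    A uniformly random element is a non-residue with probability about
    1/2, and Euler's criterion (a^((p-1)/2) == p-1 mod p) verifies a
    candidate, so repeated sampling succeeds except with probability
    about 2^-trials. No deterministic polynomial-time algorithm is
    known for this task."""
    for _ in range(trials):
        a = random.randrange(2, p)
        if pow(a, (p - 1) // 2, p) == p - 1:
            return a
    return None  # overwhelmingly unlikely

random.seed(0)
a = find_qnr(43)
assert a is not None and pow(a, 21, 43) == 42
```

Note the defining features: solutions are abundant and efficiently verifiable, but the algorithm only succeeds with high probability, which is precisely the kind of success criterion the definitions in this note pin down.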
*On the companion paper.* This paper presents my perspective on a joint project conducted together with Oded Goldreich, whereas Oded's perspective is presented in the companion paper (Goldreich, 2024). The technical content of both papers has significant overlap, and both papers propose the same definition that is referred to in the current text as $\mathcal{PPT}$-optimization (i.e., Definition 5).
Thu, 24 Oct 2024 16:49:13 +0300

TR24-162 | Computationally Hard Problems Are Hard for QBF Proof Systems Too |
Agnes Schleitzer,
Olaf Beyersdorff
https://eccc.weizmann.ac.il/report/2024/162

There has been tremendous progress in the past decade in the field of quantified Boolean formulas (QBF), both in practical solving as well as in creating a theory of corresponding proof systems and their proof complexity analysis. Both for solving and for proof complexity, it is important to have interesting formula families on which we can test solvers and gauge the strength of the proof systems. There are currently few such formula families in the literature.
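For readers less familiar with QBFs, a closed prenex QBF can be evaluated by the naive exponential-time recursion below; this toy evaluator only illustrates the object being studied and is unrelated to the Q-Res and QU-Res proof systems.

```python
def eval_qbf(prefix, matrix, assignment=()):
    """Naive evaluation of a closed prenex QBF.
    prefix: list of 'forall'/'exists' quantifiers, one per variable;
    matrix: the propositional part, a function of a tuple of bools."""
    if not prefix:
        return matrix(assignment)
    q, rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, assignment + (b,)) for b in (False, True))
    return any(branches) if q == 'exists' else all(branches)

# forall x exists y: x XOR y  -- true (take y = not x)
assert eval_qbf(['forall', 'exists'], lambda a: a[0] ^ a[1])
# exists y forall x: x XOR y  -- false (no single y works for both x)
assert not eval_qbf(['exists', 'forall'], lambda a: a[1] ^ a[0])
```

The quantifier alternation is what lets QBFs encode problems throughout the polynomial hierarchy, which is the starting point of the transformation programme described next.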
We initiate a general programme on how to transform computationally hard problems (located in the polynomial hierarchy) into QBFs that are hard for the main QBF resolution systems Q-Res and QU-Res, which relate to core QBF solvers. We illustrate this general approach on three problems from graph theory and logic. This yields QBF families that are provably hard for Q-Res and QU-Res (without any complexity assumptions).
Thu, 24 Oct 2024 16:37:02 +0300