ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)

Revision(s):

Revision #1 to TR19-183 | 9th January 2020 17:15

Randomness Extraction from Somewhat Dependent Sources





Revision #1
Authors: Marshall Ball, Oded Goldreich, Tal Malkin
Accepted on: 9th January 2020 17:15


Abstract:

We initiate a comprehensive study of the question of randomness extraction from two somewhat dependent sources of defective randomness.
Specifically, we present three natural models, based on different perspectives on the notion of bounded dependency between a pair of distributions.
Going from the most restricted model to the least restricted one, our models and main results are as follows.

\begin{enumerate}
\item{\em Bounded dependence as bounded coordination}:
Here we consider pairs of distributions that arise from independent random processes that are applied to the outcome of a single global random source, which may be viewed as a mechanism of coordination (which is adversarial from our perspective).

We show that if the min-entropy of each of the two outcomes is larger than the length of the global source, then extraction is possible (and is, in fact, feasible). We stress that the extractor has no access to the global random source nor to the internal randomness that the two processes use, but rather gets only the two dependent outcomes.

This model is equivalent to a setting in which the two outcomes are generated by two independent sources, but then each outcome is modified based on limited leakage (equiv., communication) between the two sources. (Here this leakage is measured in terms of the number of bits that were communicated, but in the next model we consider the actual influence of this leakage.)

\item{\em Bounded dependence as bounded cross influence}:
Here we consider pairs of outcomes that are produced by a pair of sources such that each source has bounded (worst-case) influence on the outcome of the other source. We stress that the extractor has no access to the randomness that the two processes use, but rather gets only the two dependent outcomes.

We show that, while (proper) randomness extraction is impossible in this case, randomness condensing is possible and feasible; specifically, the randomness deficiency of condensing is linear in our measure of cross influence, and this upper bound is tight. We also discuss various applications of such condensers, including for cryptography, standard randomized algorithms, and sublinear-time algorithms, while pointing out their benefit over using a seeded (single-source) extractor.

\item{\em Bounded dependence as bounded mutual information}:
Here the dependency between the two outcomes is measured by the mutual information between them. Due to the average-case nature of mutual information, there is a trade-off between the error (or deviation) probability and the randomness deficiency of condensing. Loosely speaking, for joint distributions of mutual information $t$, we can condense with randomness deficiency $O(t/\epsilon)$ and error $\epsilon$, and this trade-off is optimal.
\end{enumerate}
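As a concrete illustration of the two dependency measures used above (min-entropy of each outcome in the first model, mutual information of the joint outcome in the third), the following self-contained Python sketch computes both quantities for small explicit distributions. The distributions are illustrative examples chosen here, not taken from the paper.

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x]), for a distribution
    given as a dict {outcome: probability}."""
    return -math.log2(max(dist.values()))

def mutual_information(joint):
    """I(X;Y) = sum_{x,y} p(x,y) * log2(p(x,y) / (p(x) * p(y))),
    for a joint distribution given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A uniform 2-bit source has min-entropy 2.
print(min_entropy({'00': 0.25, '01': 0.25, '10': 0.25, '11': 0.25}))  # → 2.0

# Two perfectly correlated fair bits have 1 bit of mutual information ...
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # → 1.0

# ... while two independent fair bits have none.
print(mutual_information({(x, y): 0.25 for x in (0, 1) for y in (0, 1)}))  # → 0.0
```

The average-case nature of mutual information mentioned in the third model is visible here: a joint distribution can have small mutual information overall while a low-probability event carries a lot of dependence, which is what forces the error/deficiency trade-off.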
All positive results are obtained by using a standard two-source extractor (or condenser) as a black box.
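The paper does not single out which two-source extractor is plugged in as the black box; one classical example of such an object is the inner-product (Hadamard) extractor of Chor and Goldreich, which outputs the inner product of the two samples over GF(2) and extracts one nearly uniform bit from two independent $n$-bit sources whose min-entropies sum to more than $n$. A minimal sketch (the sample inputs are illustrative):

```python
def inner_product_extractor(x, y):
    """Inner-product two-source extractor: given two equal-length 0/1 lists
    sampled from the two sources, output their inner product modulo 2."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % 2

# Illustrative 4-bit samples (not from the paper):
print(inner_product_extractor([1, 0, 1, 1], [1, 1, 0, 1]))  # → 0
print(inner_product_extractor([1, 1, 0, 0], [1, 0, 1, 1]))  # → 1
```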



Changes to previous version:

Mainly, a new Section 4.3.3, which contains a converse of Proposition 4.2.


Paper:

TR19-183 | 21st December 2019 01:18

Randomness Extraction from Somewhat Dependent Sources





TR19-183
Authors: Marshall Ball, Oded Goldreich, Tal Malkin
Publication: 21st December 2019 01:18





ISSN 1433-8092