ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #2 to TR24-097 | 11th July 2024 11:28

Near-Optimal Averaging Samplers





Revision #2
Authors: Zhiyang Xun, David Zuckerman
Accepted on: 11th July 2024 11:28
Downloads: 34


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta, \varepsilon > 0$ and any constant $\alpha > 0$, our sampler uses $O(m + \log (1 / \delta))$ random bits to output $t = O\left(\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)^{1 + \alpha}\right)$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$,
\[
\Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta.
\]
The sample complexity is optimal up to the $O\left(\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)^{\alpha}\right)$ factor, and the randomness complexity is optimal up to a constant factor.

We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor bigger than 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.
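For intuition, here is a minimal Python sketch of the naive baseline sampler, which is not the construction of this paper: drawing independent uniform samples meets the same $(\varepsilon, \delta)$ guarantee by Hoeffding's inequality with $t = \lceil \ln(2/\delta) / (2\varepsilon^2) \rceil$ samples, but it spends $t \cdot m$ random bits rather than the $O(m + \log(1/\delta))$ achieved above. The function name and interface below are illustrative only.

import math
import random

def naive_averaging_sampler(f, m, eps, delta, rng=random):
    # Illustrative baseline only, not the construction of the paper.
    # Hoeffding's inequality: t = ceil(ln(2/delta) / (2*eps^2)) independent
    # uniform samples Z_i from {0,1}^m satisfy
    #     Pr[ |(1/t) * sum_i f(Z_i) - E[f]| <= eps ] >= 1 - delta,
    # at a cost of t*m random bits (one fresh m-bit string per sample).
    t = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    total = 0.0
    for _ in range(t):
        z = rng.getrandbits(m)   # uniform element of {0,1}^m, encoded as an m-bit integer
        total += f(z)            # f maps {0,1}^m to [0,1]
    return total / t

# Example: estimate the fraction of 8-bit strings with Hamming weight greater than 4
# (eps = 0.05, delta = 0.01 gives t = 1060 independent samples).
estimate = naive_averaging_sampler(lambda z: float(bin(z).count("1") > 4), m=8, eps=0.05, delta=0.01)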


Revision #1 to TR24-097 | 11th July 2024 11:19

Near-Optimal Averaging Samplers





Revision #1
Authors: Zhiyang Xun, David Zuckerman
Accepted on: 11th July 2024 11:19
Downloads: 15


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta, \varepsilon > 0$ and any constant $\alpha > 0$, our sampler uses $O(m + \log (1 / \delta))$ random bits to output $t = O\left(\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)^{1 + \alpha}\right)$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$,
\[
\Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta.
\]
The sample complexity is optimal up to the $O\left(\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)^{\alpha}\right)$ factor, and the randomness complexity is optimal up to a constant factor.

We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor bigger than 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.


Paper:

TR24-097 | 31st May 2024 20:31

Near-Optimal Averaging Samplers





TR24-097
Authors: Zhiyang Xun, David Zuckerman
Publication: 31st May 2024 20:37
Downloads: 311


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity for natural parameter choices. Specifically, for any constant $\alpha > 0$ and any $\delta > 2^{-\mathrm{poly}(1 / \varepsilon)}$, it uses $m + O(\log (1 / \delta))$ random bits to output $t = O(\log(1 / \delta) / \varepsilon^{2 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$,
\[
\Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta.
\]
The sample complexity is optimal up to the $O(1/\varepsilon^{\alpha})$ factor.
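For reference, a quick derivation of this factor, taking the optimal sample complexity to be $\Theta\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)$ as in the revised abstracts above:
\[
\frac{O\left(\frac{1}{\varepsilon^{2+\alpha}} \log \frac{1}{\delta}\right)}{\Theta\left(\frac{1}{\varepsilon^{2}} \log \frac{1}{\delta}\right)} = O\left(\frac{1}{\varepsilon^{\alpha}}\right).
\]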

We use known connections with randomness extractors and list-decodable codes to give applications to these objects.



ISSN 1433-8092