ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #4 to TR24-097 | 16th November 2024 21:35

Near-Optimal Averaging Samplers and Matrix Samplers





Revision #4
Authors: Zhiyang Xun, David Zuckerman
Accepted on: 16th November 2024 21:35
Downloads: 13
Keywords: 


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta, \varepsilon > 0$ and any constant $\alpha > 0$, our sampler uses $m + O(\log (1 / \delta))$ random bits to output $t = O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{1 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$, \[ \Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta. \] The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the $O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{\alpha})$ factor.
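To make the guarantee concrete, here is a minimal Python sketch that empirically checks the $(\varepsilon, \delta)$ averaging-sampler guarantee for the naive i.i.d. sampler, which draws each $Z_i$ independently and therefore spends $t \cdot m$ random bits instead of the $m + O(\log(1/\delta))$ achieved in this work. It is not the construction from the paper; the test function and parameter choices are illustrative only.

    import math
    import random

    def naive_iid_sampler(m, t, rng):
        # Naive baseline: each Z_i is an independent uniform m-bit string,
        # so this sampler spends t * m random bits rather than m + O(log(1/delta)).
        return [rng.getrandbits(m) for _ in range(t)]

    def empirical_failure_rate(f, m, eps, t, trials=2000, seed=0):
        # Estimate Pr[ |(1/t) * sum_i f(Z_i) - E[f]| > eps ] over independent runs.
        rng = random.Random(seed)
        true_mean = sum(f(x) for x in range(2 ** m)) / 2 ** m  # exact E[f], feasible for small m
        failures = 0
        for _ in range(trials):
            zs = naive_iid_sampler(m, t, rng)
            estimate = sum(f(z) for z in zs) / t
            if abs(estimate - true_mean) > eps:
                failures += 1
        return failures / trials

    if __name__ == "__main__":
        eps, delta, m = 0.1, 0.01, 10
        t = math.ceil(2 / eps ** 2 * math.log(2 / delta))  # Chernoff-style sample count for the i.i.d. baseline
        f = lambda x: bin(x).count("1") % 2                # an arbitrary test function into [0, 1]
        print(empirical_failure_rate(f, m, eps, t))        # expected to be well below delta

The i.i.d. baseline already achieves the optimal sample complexity $\Theta(\frac{1}{\varepsilon^2} \log \frac{1}{\delta})$; the point of the sampler above is to match this (up to the $\alpha$ exponent) while reusing randomness across samples.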
Our technique generalizes to matrix samplers. A matrix sampler is defined similarly, except that $f: \{0, 1\}^m \to \mathbb{C}^{d \times d}$ and the absolute value is replaced by the spectral norm. Our matrix sampler achieves randomness complexity $m + \tilde O (\log(d / \delta))$ and sample complexity $O((\frac{1}{\varepsilon^2} \log \frac{d}{\delta})^{1 + \alpha})$ for any constant $\alpha > 0$, both near-optimal with only a logarithmic factor in randomness complexity and an additional $\alpha$ exponent on the sample complexity.
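For the matrix case, the following sketch illustrates only the guarantee being measured, again with the naive i.i.d. baseline rather than the construction above: the deviation of the sample average from $\mathbb{E}[f]$ is taken in spectral (operator) norm. The map from strings to Hermitian $d \times d$ matrices is a hypothetical stand-in.

    import numpy as np

    def matrix_sampler_deviation(f, m, t, rng):
        # Draw t i.i.d. uniform m-bit strings and return the spectral-norm deviation
        # || (1/t) * sum_i f(Z_i) - E[f] ||_2 of the sample average from the true mean.
        exact_mean = sum(f(x) for x in range(2 ** m)) / 2 ** m   # exact E[f], feasible for small m
        zs = rng.integers(0, 2 ** m, size=t)
        sample_mean = sum(f(int(z)) for z in zs) / t
        return np.linalg.norm(sample_mean - exact_mean, ord=2)   # ord=2: largest singular value

    if __name__ == "__main__":
        d, m, t = 4, 8, 4000
        # Hypothetical test function: a fixed Hermitian d x d matrix for each m-bit string.
        table = np.random.default_rng(1).uniform(-1.0, 1.0, size=(2 ** m, d, d))
        f = lambda x: (table[x] + table[x].T) / 2
        print(matrix_sampler_deviation(f, m, t, np.random.default_rng(0)))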
We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor above 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.
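For context, the extractor application rests on the standard correspondence between averaging samplers and extractors (due to Zuckerman); the following is the known direction most relevant here, stated as background rather than as a restatement of the construction above:
\[
\text{if } \mathrm{Ext}: \{0,1\}^n \times [t] \to \{0,1\}^m \text{ is a } (k, \varepsilon)\text{-extractor, then } \mathrm{Samp}(x) = (\mathrm{Ext}(x,1), \dots, \mathrm{Ext}(x,t)) \text{ is an averaging sampler with accuracy } \varepsilon \text{ and error } \delta \le 2^{k+1-n}.
\]
Conversely, an averaging sampler that uses $n$ random bits and outputs $t$ samples yields an extractor with seed length $\log t$, which is how near-optimal sample complexity translates into near-optimal seed length.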



Changes to previous version:

Added new results


Revision #3 to TR24-097 | 16th November 2024 21:16

Near-Optimal Averaging Samplers and Matrix Samplers





Revision #3
Authors: Zhiyang Xun, David Zuckerman
Accepted on: 16th November 2024 21:16
Downloads: 3
Keywords: 


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta, \varepsilon > 0$ and any constant $\alpha > 0$, our sampler uses $m + O(\log (1 / \delta))$ random bits to output $t = O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{1 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$,
\[
\Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta.
\]
The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the $O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{\alpha})$ factor.

Our technique generalizes to matrix samplers. A matrix sampler is defined similarly, except that $f: \{0, 1\}^m \to \mathbb{C}^{d \times d}$ and the absolute value is replaced by the spectral norm.
Our matrix sampler achieves randomness complexity $m + \widetilde O (\log(d / \delta))$ and sample complexity $ O((\frac{1}{\varepsilon^2} \log \frac{d}{\delta})^{1 + \alpha})$ for any constant $\alpha > 0$, both near-optimal with only a logarithmic factor in randomness complexity and an additional $\alpha$ exponent on the sample complexity.

We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor above 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.



Changes to previous version:

Added new results


Revision #2 to TR24-097 | 11th July 2024 11:28

Near-Optimal Averaging Samplers





Revision #2
Authors: Zhiyang Xun, David Zuckerman
Accepted on: 11th July 2024 11:28
Downloads: 166
Keywords: 


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta, \varepsilon > 0$ and any constant $\alpha > 0$, our sampler uses $O(m + \log (1 / \delta))$ random bits to output $t = O(\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)^{1 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$,
\[
\Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta.
\]
The sample complexity is optimal up to the $O\left(\left(\frac{1}{\varepsilon^2} \log \frac{1}{\delta}\right)^{\alpha}\right)$ factor, and the randomness complexity is optimal up to a constant factor.

We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor bigger than 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.


Revision #1 to TR24-097 | 11th July 2024 11:19

Near-Optimal Averaging Samplers





Revision #1
Authors: Zhiyang Xun, David Zuckerman
Accepted on: 11th July 2024 11:19
Downloads: 78
Keywords: 


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta, \epsilon > 0$ and any constant $\alpha > 0$, our sampler uses $O(m + \log (1 / \delta))$ random bits to output $t = O\left(\left(\frac{1}{\epsilon^2} \log \frac{1}{\delta}\right)^{1 + \alpha}\right)$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$,
\[
\Pr\left(\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \epsilon\right) \geq 1 - \delta.
\]
The sample complexity is optimal up to the $O\left(\left(\frac{1}{\epsilon^2} \log \frac{1}{\delta}\right)^{\alpha}\right)$ factor, and the randomness complexity is optimal up to a constant factor.

We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor bigger than 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.


Paper:

TR24-097 | 31st May 2024 20:31

Near-Optimal Averaging Samplers





TR24-097
Authors: Zhiyang Xun, David Zuckerman
Publication: 31st May 2024 20:37
Downloads: 411
Keywords: 


Abstract:

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity for natural parameter choices. Specifically, for any constant $\alpha > 0$, for $\delta > 2^{-\mathrm{poly}(1 / \varepsilon)}$, it uses $m + O(\log (1 / \delta))$ random bits to output $t = O(\log(1 / \delta) / \varepsilon^{2 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$, \[
\Pr\left[\left|\frac{1}{t}\sum_{i=1}^tf(Z_i) - \mathbb{E} \, f\right| \le \varepsilon \right] \ge 1 - \delta.
\]
The sample complexity is optimal up to the $O(\varepsilon^\alpha)$ factor.

We use known connections with randomness extractors and list-decodable codes to give applications to these objects.



ISSN 1433-8092