Revision #2 Authors: Jayadev Acharya, Clement Canonne, Gautam Kamath

Accepted on: 7th December 2018 03:28

Downloads: 688


A recent model for property testing of probability distributions (Chakraborty et al., ITCS 2013; Canonne et al., SICOMP 2015) enables tremendous savings in the sample complexity of testing algorithms, by allowing them to condition the sampling on subsets of the domain. In particular, Canonne, Ron, and Servedio (SICOMP 2015) showed that, in this setting, testing identity of an unknown distribution $D$ (i.e., whether $D=D^\ast$ for an explicitly known $D^\ast$) can be done with a constant number of queries, independent of the support size $n$ -- in contrast to the $\Omega(\sqrt{n})$ samples required in the standard sampling model. It was unclear whether the same stark contrast holds for the case of testing equivalence, where both distributions are unknown. While Canonne et al. established a $\mathrm{poly}(\log n)$-query upper bound for equivalence testing, very recently brought down to $\tilde O(\log\log n)$ by Falahatgar et al. (COLT 2015), whether a dependence on the domain size $n$ is necessary remained open, and was explicitly posed by Fischer at the Bertinoro Workshop on Sublinear Algorithms (2014).

We show that any testing algorithm for equivalence must make $\Omega(\sqrt{\log\log n})$ queries in the conditional sampling model. This demonstrates a gap between identity and equivalence testing, absent in the standard sampling model (where both problems have sample complexity $n^{\Theta(1)}$).

We also obtain results on the query complexity of uniformity testing and support-size estimation with conditional samples. We answer a question of Chakraborty et al. (ITCS 2013), showing that non-adaptive uniformity testing indeed requires $\Omega(\log n)$ queries in the conditional model. For the related problem of support-size estimation, we provide both adaptive and non-adaptive algorithms, with query complexities $\mathrm{poly}(\log\log n)$ and $\mathrm{poly}(\log n)$, respectively.
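For intuition, the conditional sampling oracle underlying all of these results can be simulated in a few lines: a query specifies a subset $S$ of the domain, and the oracle returns a sample from $D$ restricted to $S$. The sketch below is a minimal simulation, not code from the paper; the function name and the convention for handling a query set of total mass zero (here, a uniform element of the set, as in Chakraborty et al.) are illustrative assumptions.

```python
import random

def cond_sample(dist, subset):
    """One COND-oracle query: sample from `dist` (a dict mapping domain
    elements to probabilities) conditioned on the query set `subset`.
    Illustrative simulation only; conventions vary across papers."""
    weights = {x: dist.get(x, 0.0) for x in subset}
    total = sum(weights.values())
    if total == 0:
        # Zero-mass query set: return a uniformly random element of it.
        return random.choice(list(subset))
    r = random.random() * total
    acc = 0.0
    for x, w in weights.items():
        acc += w
        if r < acc:
            return x
    return x  # guard against floating-point rounding at the boundary
```

Conditioning on the whole domain recovers an ordinary sample, while conditioning on small, adaptively chosen sets is what allows identity testing with a constant number of queries.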

Presentation improved.

Revision #1 Authors: Jayadev Acharya, Clement Canonne, Gautam Kamath

Accepted on: 19th April 2015 01:41

Downloads: 1096


A recent model for property testing of probability distributions enables tremendous savings in the sample complexity of testing algorithms, by allowing them to condition the sampling on subsets of the domain.

In particular, Canonne, Ron, and Servedio showed that, in this setting, testing identity of an unknown distribution $D$ (i.e., whether $D=D^*$ for an explicitly known $D^*$) can be done with a constant number of samples, independent of the support size $n$ -- in contrast to the $\Omega(\sqrt{n})$ required in the standard sampling model. However, it was unclear whether the same held for the case of testing equivalence, where both distributions are unknown. Indeed, while Canonne, Ron, and Servedio established a $\mathrm{poly}(\log n)$-query upper bound for equivalence testing, very recently brought down to $\tilde O(\log\log n)$ by Falahatgar et al., whether a dependence on the domain size $n$ is necessary was still open, and explicitly posed by Fischer at the Bertinoro Workshop on Sublinear Algorithms. In this work, we answer the question in the affirmative, showing that any testing algorithm for equivalence must make $\Omega(\sqrt{\log\log n})$ queries in the conditional sampling model. Interestingly, this demonstrates an intrinsic qualitative gap between identity and equivalence testing, absent in the standard sampling model (where both problems have sample complexity $n^{\Theta(1)}$).

Turning to another question, we investigate the complexity of support size estimation. We provide a doubly-logarithmic upper bound for the adaptive version of this problem, generalizing work of Ron and Tsur to our weaker model. We also establish a logarithmic lower bound for the non-adaptive version of this problem. This latter result carries on to the related problem of non-adaptive uniformity testing, an exponential improvement over previous results that resolves an open question of Chakraborty, Fischer, Goldhirsh, and Matsliah.

Results and presentation improved.

TR14-156 Authors: Jayadev Acharya, Clement Canonne, Gautam Kamath

Publication: 26th November 2014 21:08

Downloads: 1496


A recent model for property testing of probability distributions enables tremendous savings in the sample complexity of testing algorithms, by allowing them to condition the sampling on subsets of the domain.

In particular, Canonne et al. showed that, in this setting, testing identity of an unknown distribution $D$ (i.e., whether $D=D^\ast$ for an explicitly known $D^\ast$) can be done with a constant number of samples, independent of the support size $n$ -- in contrast to the $\Omega(\sqrt{n})$ required in the standard sampling model. However, it was unclear whether the same held for the case of testing equivalence, where both distributions are unknown. Indeed, while the best known upper bound for equivalence testing is $\mathrm{poly}(\log n)$, whether a dependence on the domain size $n$ is necessary was still open, and explicitly posed at the Bertinoro Workshop on Sublinear Algorithms. In this work, we answer the question in the affirmative, showing that any testing algorithm for equivalence must make $\Omega(\sqrt{\log\log n})$ queries in the conditional sampling model. Interestingly, this demonstrates an intrinsic qualitative gap between identity and equivalence testing, absent in the standard sampling model (where both problems have sample complexity $n^{\Theta(1)}$).

Turning to another question, we strengthen a result of Ron and Tsur on support size estimation in the conditional sampling model, with an algorithm to approximate the support size of an arbitrary distribution. This result matches the previously known upper bound in the restricted case where the distribution is guaranteed to be uniform over a subset. Furthermore, we settle a related open problem of theirs, proving tight lower bounds on support size estimation with non-adaptive queries.