Revision #1
Authors: Jayadev Acharya, Clement Canonne, Gautam Kamath

Accepted on: 19th April 2015 01:41

Downloads: 445


A recent model for property testing of probability distributions enables tremendous savings in the sample complexity of testing algorithms, by allowing them to condition the sampling on subsets of the domain.
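The conditional sampling (COND) oracle underlying this model can be sketched as follows. This is an illustrative toy implementation, not code from the paper: `make_cond_oracle` and the example distribution `D` are hypothetical names, and the distribution is represented explicitly as a dictionary purely for demonstration.

```python
import random

def make_cond_oracle(dist):
    """Given a distribution as {element: probability}, return a COND oracle:
    a function that, on query subset S of the domain, returns one sample
    drawn from dist conditioned on S (assuming dist assigns S positive mass)."""
    def cond(S):
        weights = {x: dist.get(x, 0.0) for x in S}
        total = sum(weights.values())
        if total == 0:
            raise ValueError("conditioning set has zero probability")
        xs = list(weights)
        # Renormalize the mass on S and sample proportionally.
        return random.choices(xs, weights=[weights[x] / total for x in xs])[0]
    return cond

# Toy distribution on {1, 2, 3, 4}; conditioning on {1, 2} renormalizes
# the mass so 1 is returned with probability 2/3 and 2 with probability 1/3.
D = {1: 0.5, 2: 0.25, 3: 0.15, 4: 0.10}
cond = make_cond_oracle(D)
sample = cond({1, 2})
```

Querying with the full domain recovers the standard sampling oracle, which is why the conditional model is at least as powerful as the standard one.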

In particular, Canonne, Ron, and Servedio showed that, in this setting, testing identity of an unknown distribution $D$ (i.e., whether $D=D^*$ for an explicitly known $D^*$) can be done with a constant number of samples, independent of the support size $n$ -- in contrast to the $\sqrt{n}$ required in the standard sampling model. However, it was unclear whether the same held for the case of testing equivalence, where both distributions are unknown. Indeed, while Canonne, Ron, and Servedio established a $\mathrm{poly}\log(n)$-query upper bound for equivalence testing, very recently brought down to $\tilde O(\log\log n)$ by Falahatgar et al., the question of whether any dependence on the domain size $n$ is necessary remained open, and was explicitly posed by Fischer at the Bertinoro Workshop on Sublinear Algorithms. In this work, we answer the question in the affirmative, showing that any testing algorithm for equivalence must make $\Omega(\sqrt{\log\log n})$ queries in the conditional sampling model. Interestingly, this demonstrates an intrinsic qualitative gap between identity and equivalence testing, absent in the standard sampling model (where both problems have sample complexity $n^{\Theta(1)}$).

Turning to another question, we investigate the complexity of support size estimation. We provide a doubly-logarithmic upper bound for the adaptive version of this problem, generalizing work of Ron and Tsur to our weaker model. We also establish a logarithmic lower bound for the non-adaptive version of this problem. This latter result carries over to the related problem of non-adaptive uniformity testing, an exponential improvement over previous results that resolves an open question of Chakraborty, Fischer, Goldhirsh, and Matsliah.

Results and presentation improved.

TR14-156
Authors: Jayadev Acharya, Clement Canonne, Gautam Kamath

Publication: 26th November 2014 21:08

Downloads: 746


A recent model for property testing of probability distributions enables tremendous savings in the sample complexity of testing algorithms, by allowing them to condition the sampling on subsets of the domain.

In particular, Canonne et al. showed that, in this setting, testing identity of an unknown distribution $D$ (i.e., whether $D=D^\ast$ for an explicitly known $D^\ast$) can be done with a constant number of samples, independent of the support size $n$ -- in contrast to the $\sqrt{n}$ required in the standard sampling model. However, it was unclear whether the same held for the case of testing equivalence, where both distributions are unknown. Indeed, while the best known upper bound for equivalence testing is ${\rm polylog}(n)$, the question of whether any dependence on the domain size $n$ is necessary remained open, and was explicitly posed at the Bertinoro Workshop on Sublinear Algorithms. In this work, we answer the question in the affirmative, showing that any testing algorithm for equivalence must make $\Omega(\sqrt{\log\log n})$ queries in the conditional sampling model. Interestingly, this demonstrates an intrinsic qualitative gap between identity and equivalence testing, absent in the standard sampling model (where both problems have sample complexity $n^{\Theta(1)}$).

Turning to another question, we strengthen a result of Ron and Tsur on support size estimation in the conditional sampling model, with an algorithm to approximate the support size of an arbitrary distribution. This result matches the previously known upper bound in the restricted case where the distribution is guaranteed to be uniform over a subset. Furthermore, we settle a related open problem of theirs, proving tight lower bounds on support size estimation with non-adaptive queries.