We prove anti-concentration bounds for the inner product of two independent random vectors. For example, we show that if $A,B$ are subsets of the cube $\{\pm 1\}^n$ with $|A| \cdot |B| \geq 2^{1.01 n}$, and $X \in A$ and $Y \in B$ are sampled independently and uniformly, then the inner product $\langle X, Y\rangle$ takes on any fixed value with probability at most $O(\tfrac{1}{\sqrt{n}})$. Extending Hal\'asz's work, we prove stronger bounds when the choices for $X$ are unstructured. We also describe applications to communication complexity, randomness extraction and additive combinatorics.
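For intuition, a standard computation (assuming $n$ is even) shows that the $O(\tfrac{1}{\sqrt{n}})$ rate is tight already for the full cube: if $A = B = \{\pm 1\}^n$, the products $X_i Y_i$ are independent uniform signs, so
\[
\Pr\left[\langle X, Y\rangle = 0\right] = \binom{n}{n/2}\, 2^{-n} = (1+o(1))\sqrt{\frac{2}{\pi n}} = \Theta\!\left(\frac{1}{\sqrt{n}}\right).
\]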
More general scenarios.
We prove an anti-concentration bound for the inner product of two independent random vectors in the discrete cube. If, for example, $A,B$ are subsets of the cube $\{\pm 1\}^n$ with $|A| \cdot |B| \geq 2^{1.01 n}$, and $X \in A$ and $Y \in B$ are sampled independently and uniformly, then $\sum_i X_i Y_i$ takes on any fixed value with probability at most $O(\frac{1}{\sqrt{n}})$. The proof provides a general framework for establishing anti-concentration in discrete domains. We describe applications to communication complexity, randomness extraction and additive combinatorics.
We prove an anti-concentration bound for the inner product of two independent random vectors in the discrete cube. If, for example, $A,B$ are subsets of the cube $\{\pm 1\}^n$ with $|A| \cdot |B| \geq 2^{1.01 n}$, and $X \in A$ and $Y \in B$ are sampled independently and uniformly, then $\sum_i X_i Y_i$ takes on any fixed value with probability at most $O(1/\sqrt{n})$. The proof provides a general framework for establishing anti-concentration in discrete domains. We describe applications to communication complexity, randomness extraction and additive combinatorics.
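As an illustration, here is a minimal Monte Carlo sketch (our own, not from the paper; the function name and parameters are hypothetical). It estimates the largest point probability of $\sum_i X_i Y_i$ for $A = B = \{\pm 1\}^n$, the simplest sets satisfying $|A| \cdot |B| \geq 2^{1.01 n}$, and compares it with the $\sqrt{2/(\pi n)}$ rate:

import numpy as np

# Monte Carlo sketch: estimate max_v Pr[<X, Y> = v] for X, Y sampled
# independently and uniformly from {±1}^n, and compare with
# sqrt(2/(pi*n)), the asymptotic point probability at zero.
rng = np.random.default_rng(0)

def max_point_prob(n, trials=100_000):
    # Uniform ±1 vectors, stored as int8 to keep memory small.
    X = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
    Y = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
    ip = (X * Y).sum(axis=1)              # inner products <X, Y>
    _, counts = np.unique(ip, return_counts=True)
    return counts.max() / trials          # empirical mode probability

for n in (64, 256, 1024):
    print(f"n={n:5d}  estimate={max_point_prob(n):.4f}  "
          f"sqrt(2/(pi*n))={np.sqrt(2 / (np.pi * n)):.4f}")

The estimates should track the last column, since for the full cube the mode of $\sum_i X_i Y_i$ sits at zero.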
A sharp bound on the number of directions where anti-concentration does not hold.
We prove an anti-concentration bound for the inner product of two independent random vectors in the discrete cube. If, for example, $A,B$ are subsets of the cube $\{\pm 1\}^n$ with $|A| \cdot |B| \geq 2^{1.01 n}$, and $X \in A$ and $Y \in B$ are sampled independently and uniformly, then $\langle X, Y\rangle$ takes on any fixed value with probability at most $O(\tfrac{1}{\sqrt{n}})$. The proof provides a general framework for establishing anti-concentration in discrete domains. We describe applications to communication complexity, randomness extraction and additive combinatorics.
We prove anti-concentration for the inner product of two independent random vectors in the discrete cube. Our results imply Chakrabarti and Regev's lower bound on the randomized communication complexity of the gap-Hamming problem. They are also meaningful in the context of randomness extraction. The proof provides a framework for establishing anti-concentration in discrete domains. The argument has two components: a local component that uses harmonic analysis, and a global ("information-theoretic") component.
Conversations with Anup Rao led to a sharp bound with a simpler proof. Some calculations from the previous version may be useful elsewhere.