Determining the randomized (or distributional) communication complexity of disjointness is a central problem in communication complexity, with roots in the foundational work of Babai, Frankl, and Simon in the 1980s and culminating in the celebrated works of Kalyanasundaram-Schnitger and Razborov in 1992. However, the question of obtaining tight bounds for product distributions remained open until the more recent work of Bottesch, Gavinsky, and Klauck resolved it. In this note we revisit this classical problem and give a short, streamlined proof of the best known bounds, with improved quantitative dependence on the error parameter.
Our approach is based on a simple combinatorial lemma that may be of independent interest: if two sets drawn independently from two distributions are disjoint with non-negligible probability, then one can extract two subfamilies, each of reasonably large measure, that are fully cross-disjoint (equivalently, they form a large monochromatic rectangle for disjointness).
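The lemma can be stated more formally as follows; this is a sketch in hypothetical notation, since the abstract does not fix the universe, the measures, or the exact quantitative dependence $\delta(\varepsilon)$:

```latex
% Hypothetical formalization of the combinatorial lemma; the precise
% form of delta(epsilon) is not specified here.
\begin{lemma}
Let $\mu$ and $\nu$ be probability distributions on subsets of $[n]$,
and let $X \sim \mu$ and $Y \sim \nu$ be drawn independently. If
$\Pr[X \cap Y = \emptyset] \ge \varepsilon$, then there exist families
$\mathcal{A}, \mathcal{B} \subseteq 2^{[n]}$ with
$\mu(\mathcal{A}) \ge \delta$ and $\nu(\mathcal{B}) \ge \delta$, for
some $\delta = \delta(\varepsilon) > 0$, such that
$A \cap B = \emptyset$ for every $A \in \mathcal{A}$ and
$B \in \mathcal{B}$; that is, $\mathcal{A} \times \mathcal{B}$ is a
monochromatic rectangle for disjointness.
\end{lemma}
```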