We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution $D$ that is $\delta$-dense in a distribution that is $\epsilon'$-indistinguishable from uniform, there exists a ``dense model'' for $D$, that is, a distribution that is $\delta$-dense in the uniform distribution and is $\epsilon$-indistinguishable from $D$. This $\epsilon$-indistinguishability is with respect to an arbitrary small class of functions $F$. For the natural case where $\epsilon'\ge\Omega(\epsilon\delta)$ and $\epsilon\ge\delta^{O(1)}$, our lower bound implies that $\Omega\big(\sqrt{(1/\epsilon)\log(1/\delta)}\cdot\log|F|\big)$ advice bits are necessary. There is only a polynomial gap between our lower bound and the best upper bound for this case (due to Zhang), which is $O\big((1/\epsilon^2)\log(1/\delta)\cdot\log|F|\big)$. Our lower bound can be viewed as an analog of list size lower bounds for list-decoding of error-correcting codes, but for ``dense model decoding'' instead.
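For concreteness, the two notions above are commonly formalized as follows (a sketch of the standard definitions; the paper's exact conventions may differ slightly in normalization). A distribution $A$ is $\delta$-dense in a distribution $B$ if
\[
  \Pr_A[x] \;\le\; \tfrac{1}{\delta}\,\Pr_B[x] \quad \text{for every outcome } x,
\]
and $A$ and $B$ are $\epsilon$-indistinguishable with respect to a class $F$ if
\[
  \bigl|\, \mathbb{E}_{x\sim A}[f(x)] - \mathbb{E}_{x\sim B}[f(x)] \,\bigr| \;\le\; \epsilon \quad \text{for every } f\in F.
\]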