For a class $\mathcal{F}$ of formulas (general de Morgan or
read-once de Morgan), the \emph{shrinkage exponent} $\Gamma_{\mathcal{F}}$ is
the parameter measuring how much the size of a formula
$F\in\mathcal{F}$ shrinks, in expectation, when $F$ is hit with a random restriction. A Boolean function $f\colon \{0,1\}^n\to\{1,-1\}$
is \emph{Fourier-concentrated} if, when viewed in the
Fourier basis, $f$ has most of its total Fourier mass on ``low-degree''
coefficients.
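As a reminder of the two notions (stated informally), writing $L(\cdot)$ for formula size (leaf count): having shrinkage exponent $\Gamma_{\mathcal{F}}$ means that a $p$-random restriction $\rho$ (leaving each variable unset independently with probability $p$, and otherwise fixing it to a uniformly random bit) shrinks the expected size of $F$ to roughly a $p^{\Gamma_{\mathcal{F}}}$ fraction of its original size, up to lower-order terms; and the Fourier expansion of $f$ over the characters $\chi_A(x)=(-1)^{\sum_{i\in A}x_i}$ has total mass $1$ by Parseval's identity, so ``concentration'' refers to where this unit mass sits:
\[
\mathbb{E}_{\rho}\bigl[L(F|_\rho)\bigr] \;\lesssim\; p^{\Gamma_{\mathcal{F}}}\cdot s + 1,
\qquad
f(x)=\sum_{A\subseteq [n]}\hat{f}(A)\,\chi_A(x),
\qquad
\sum_{A\subseteq [n]}\hat{f}(A)^2 = 1.
\]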
We show a direct connection between the two notions by proving that
\emph{shrinkage implies Fourier concentration}: for a class
$\mathcal{F}$ with shrinkage exponent $\Gamma_{\mathcal{F}}$, a formula $F\in\mathcal{F}$ of size
$s$ has most of its Fourier mass on coefficients of degree
up to about $s^{1/\Gamma_{\mathcal{F}}}$. More precisely,
for a Boolean
function $f\colon\{0,1\}^n\to\{1,-1\}$ computable by a formula in $\mathcal{F}$
of (large enough) size $s$, and for any parameter $r>0$,
\[
\sum_{A\subseteq [n]\; :\; |A|\geq s^{1/\Gamma}\cdot r} \hat{f}(A)^2 \leq s\cdot\polylog(s)\cdot
\exp\left(-\frac{r^{\frac{\Gamma}{\Gamma-1}}}{s^{o(1)}}\right),
\]
where $\Gamma$ is the shrinkage exponent for the corresponding class
of formulas: $\Gamma=2$ for de Morgan formulas, and
$\Gamma=1/\log_2(\sqrt{5}-1)\approx 3.27$ for read-once de Morgan
formulas. This Fourier concentration result is optimal, to within the $o(1)$ term in the exponent of $s$.
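In particular, choosing $r=s^{\epsilon}$ for a constant $\epsilon>0$ yields the following simpler (if slightly weaker) form of the bound, valid for all sufficiently large $s$, since the resulting exponent $s^{\epsilon\Gamma/(\Gamma-1)-o(1)}$ grows faster than $s^{\epsilon/3}$ and swamps the $s\cdot\polylog(s)$ factor:
\[
\sum_{A\subseteq [n]\;:\; |A|> s^{1/\Gamma+\epsilon}} \hat{f}(A)^2 \leq \exp\left(-s^{\epsilon/3}\right).
\]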
As standard applications of these Fourier concentration results,
we get that subquadratic-size de Morgan formulas have negligible correlation with parity, and that they are learnable under the uniform distribution, as well as lossily compressible, in subexponential time.
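To illustrate the parity application, here is a sketch of the standard argument, under the illustrative parametrization $s=n^{2-\delta}$ for a constant $\delta>0$. The correlation of $f$ with the parity of all $n$ inputs equals the single top Fourier coefficient $|\hat{f}([n])|$, and for a small enough constant $\epsilon>0$ (say $\epsilon=\delta/8$) we have $n>s^{1/2+\epsilon}$, so the concentration bound above with $\Gamma=2$ gives
\[
\hat{f}([n])^2 \;\leq\; \sum_{A\subseteq [n]\;:\; |A|> s^{1/2+\epsilon}} \hat{f}(A)^2 \;\leq\; \exp\left(-s^{\epsilon/3}\right),
\]
i.e., correlation at most $\exp(-n^{\Omega(1)})$. The learning claim follows along similarly standard lines via the low-degree algorithm of Linial, Mansour, and Nisan, which learns under the uniform distribution, in time roughly $n^{O(d)}$, any function whose Fourier mass is concentrated on degree at most $d$.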
We also show the tight $\Theta(s^{1/\Gamma})$ bound on
the average sensitivity of read-once formulas of size $s$, which
mirrors the known tight bound $\Theta(\sqrt{s})$ on the average
sensitivity of general de Morgan formulas of size $s$.
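For context, the average sensitivity (total influence) of $f$ has the standard Fourier expression
\[
\mathrm{AS}(f) \;=\; \sum_{A\subseteq [n]} |A|\cdot \hat{f}(A)^2,
\]
so Fourier concentration around degree $s^{1/\Gamma}$, with exponentially small tails as above, is precisely the kind of statement that yields such average-sensitivity upper bounds.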