List-decoding and list recovery ask how much corruption or uncertainty a code can tolerate while still keeping the number of plausible codewords small. For codes over large alphabets, the ultimate benchmark for list-decoding is the ($\epsilon$-relaxed) generalized Singleton bound, under which a code of rate $R$ is list-of-$L$ decodable up to radius $\frac{L}{L+1}(1-R-\epsilon)$. We prove improved alphabet-size bounds for random linear and additive (folded) codes in this regime. Specifically, we show that random $s$-folded codes over any finite field $\mathbb F_q$ with $s=\Omega(1/\epsilon)$ meet the $\epsilon$-relaxed generalized Singleton bound for all list sizes $L$, matching the optimal alphabet size of $\exp(\Theta(1/\epsilon))$. For random linear codes, we show that alphabet size $\exp(O(\log L/\epsilon))$ suffices, improving the previous bound of $\exp(O(L/\epsilon))$. In the important regime $L=\Theta(1/\epsilon)$, where one list-decodes up to radius $(1-R-\epsilon)$, this improves the alphabet size for random linear codes from $\exp(O(1/\epsilon^2))$ to $\exp(\widetilde O(1/\epsilon))$.
For list recovery, we close the gap between the two best previous tradeoffs: prior work achieved either an alphabet size polynomial in $\ell$ or a near-optimal output list size, but not both simultaneously.
We show that random linear codes achieve near-optimal output list size $(\ell/(R+\epsilon))^{O(R/\epsilon+1)}$ over alphabet size $(\ell/(R+\epsilon))^{O((R+\epsilon)/\epsilon^2)}$, which is polynomial in $\ell$.
Our gains stem from isolating the right combinatorial tools for counting constraints and from identifying canonical configurations whose avoidance suffices for list-decoding or list recovery. For list-decoding, we combine tools from weakly-partition-connected agreement hypergraphs with the partition structure implicit in recent subspace-design arguments, counting only partition-induced local profiles that capture the genuinely new linear constraints in a bad witness. For list recovery, we pair a reworked local coordinate-wise linear framework with discrete Brascamp--Lieb inequalities to quotient arbitrary bad configurations down to minimal profiles. Together, our methods yield modular techniques and a general framework for sharpening the analysis of random linear codes across a broad range of settings, instantiated here for list-decoding and list recovery.
Additionally, our presentation is self-contained, fully developing and proving all necessary ingredients.