Given an efficient algorithm that correctly computes a tiny fraction of the entries of the product of a small fraction of pairs of matrices, can one design an efficient algorithm that computes matrix multiplication exactly for all matrices? In this paper, we present such ``worst-case exact to average-case approximate'' reductions, which transform any algorithm that correctly computes a tiny fraction of the entries of the product of two uniformly random matrices over a finite field into a randomized worst-case algorithm that computes matrix multiplication for all matrices. Under non-uniform reductions, we present an optimal reduction that error-corrects an algorithm whose output has expected relative Hamming distance at most $1 - \frac{1}{p} - \varepsilon$ from the product of two random matrices over a finite field of size $p$, for any constant $\varepsilon > 0$. Under uniform reductions, we present efficient reductions that correct a $(1 - \varepsilon)$-fraction of errors over a field of size $p$, for every $\varepsilon > 0$ and all sufficiently large $p$. We also present an optimal uniform reduction for the Online Matrix-Vector Multiplication problem.
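For context, a standard warm-up (and not the reduction presented in this paper) is the classical random self-reduction for matrix multiplication, which exploits bilinearity: for uniformly random matrices $R$ and $S$ over the field,
\[
AB = (A+R)(B+S) - (A+R)S - R(B+S) + RS,
\]
and each of the four calls is made on a pair of independent uniformly random matrices, so an average-case algorithm that errs with probability $\delta$ on a random pair yields, by a union bound, a worst-case algorithm correct with probability at least $1 - 4\delta$. This standard argument requires $\delta < 1/4$ and breaks down entirely in the regime considered here, where the average-case algorithm may be correct on only a tiny fraction of the output entries.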
The non-uniform reduction is based on a new and simple proof of Yao's XOR lemma for multi-output functions, whose complexity overhead is independent of the length of the output.
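For reference, the classical single-output form of the lemma states, informally: if every circuit of size $s$ fails to compute $f \colon \{0,1\}^n \to \{0,1\}$ on at least a $\delta$ fraction of inputs, then every circuit of somewhat smaller size has advantage at most roughly $(1 - 2\delta)^k + \varepsilon$ over random guessing when computing
\[
f^{\oplus k}(x_1, \dots, x_k) = f(x_1) \oplus \cdots \oplus f(x_k).
\]
Our setting concerns an analogue for multi-output functions $f \colon \{0,1\}^n \to \{0,1\}^m$, where the point of the new proof is that the complexity overhead of the reduction does not grow with $m$.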