We provide a computational-complexity lens on the power of machine learning models, particularly their ability to model complex systems.
Machine learning models are often trained on data drawn from sampleable or even more complex distributions, a far wider class than the computable ones. By focusing on computable distributions, however, we can study how machine learning models manage complexity via probability. We abstract away from specific learning mechanisms, modeling machine learning as producing P/poly-computable distributions with polynomially bounded max-entropy.
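For concreteness, one standard formalization of max-entropy (our notation, not necessarily the paper's) takes the max-entropy of a distribution $\mu$ over finite strings to be the logarithm of its support size:

\[
H_0(\mu) \;=\; \log_2 \bigl|\{x : \mu(x) > 0\}\bigr|,
\]

so a polynomially bounded max-entropy requirement reads $H_0(\mu) \le \mathrm{poly}(n)$, i.e., $\mu$ is supported on at most $2^{\mathrm{poly}(n)}$ strings.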
We illustrate how learning computable distributions models complexity by showing that if a machine learning model produces a distribution $\mu$ that minimizes error against the output distribution of a cryptographic pseudorandom generator, then $\mu$ must be close to uniform.
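The pseudorandomness phenomenon behind this claim can be illustrated numerically. The sketch below is our own toy construction, not the paper's: SHA-256 stands in heuristically for a cryptographic PRG (it is not a proven PRG), and all names and parameters are illustrative. It samples the toy PRG's output distribution over many seeds and checks that its total variation distance from uniform is small, so any distribution fitting the PRG's outputs well must likewise be near uniform.

```python
import hashlib

def prg(seed: bytes, out_bits: int) -> int:
    """Heuristic stand-in for a cryptographic PRG: hash the seed
    and truncate the digest to out_bits bits (illustrative only)."""
    digest = hashlib.sha256(seed).digest()
    return int.from_bytes(digest, "big") % (1 << out_bits)

def statistical_distance(p: dict, q: dict) -> float:
    """Total variation distance between two distributions given as
    {outcome: probability} dictionaries."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

n_bits = 4          # size of the toy PRG's output space (2**4 outcomes)
num_seeds = 1 << 15 # number of seeds used to estimate the distribution

# Empirical output distribution of the PRG over many seeds.
counts = {}
for s in range(num_seeds):
    x = prg(s.to_bytes(4, "big"), n_bits)
    counts[x] = counts.get(x, 0) + 1
mu = {k: v / num_seeds for k, v in counts.items()}

uniform = {k: 1.0 / (1 << n_bits) for k in range(1 << n_bits)}
print(statistical_distance(mu, uniform))
```

The printed distance shrinks as `num_seeds` grows, matching the intuition that a distribution tracking the PRG's outputs cannot be distinguished from uniform by any efficient test.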