In a variety of PAC learning models, there appears to be a tradeoff
between time and information: with unlimited time, a small amount of
information suffices, whereas under time restrictions, more information
sometimes seems to be required.
In addition, it has long been known that there are
concept classes that can be learned in the absence of computational
restrictions, but (under standard cryptographic assumptions) cannot be
learned in polynomial time regardless of sample size. Yet these
results do not answer the question of whether there are classes
for which learning from a small set of examples is infeasible but
becomes feasible when the learner has access to (polynomially) more
examples.
To address this question, we introduce a new measure of learning
complexity, called computational sample complexity, which represents
the number of examples sufficient for polynomial-time learning with
respect to a fixed distribution.
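As an illustrative formalization (the symbols $\mathcal{C}$, $D$, $\mathrm{SC}$, and $\mathrm{CSC}$ are our own shorthand, with the accuracy and confidence parameters left implicit), the two measures may be contrasted as
\[
\mathrm{SC}(\mathcal{C},D) \;=\; \min\bigl\{\, m : \text{some (computationally unbounded) learner PAC-learns } \mathcal{C} \text{ under } D \text{ from } m \text{ examples} \,\bigr\},
\]
\[
\mathrm{CSC}(\mathcal{C},D) \;=\; \min\bigl\{\, m : \text{some polynomial-time learner PAC-learns } \mathcal{C} \text{ under } D \text{ from } m \text{ examples} \,\bigr\},
\]
so that $\mathrm{SC}(\mathcal{C},D) \le \mathrm{CSC}(\mathcal{C},D)$ always holds.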
We then exhibit concept classes that (under similar cryptographic
assumptions) possess arbitrarily large gaps between their standard
(information-theoretic) sample complexity and their computational
sample complexity. We also
demonstrate such gaps for learning from membership queries and
learning from noisy examples.
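Schematically (the gap function $g$ below is our own illustrative notation, not fixed by the results themselves), such a gap asserts the existence of a class $\mathcal{C}$ and distribution $D$ with
\[
\mathrm{CSC}(\mathcal{C},D) \;\ge\; g\bigl(\mathrm{SC}(\mathcal{C},D)\bigr),
\]
where $g$ can be chosen to grow arbitrarily fast within polynomial bounds: an unbounded learner succeeds from few examples, yet, under the cryptographic assumption, every polynomial-time learner requires far more.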