We investigate a variant of the on-line learning model for classes
of {0,1}-valued functions (concepts) in which the labels of a certain
fraction of the input instances are corrupted by adversarial noise.
We propose an extension of a general learning strategy, known as
"Closure Algorithm", to this noise model, and show a worst-case
mistake bound for learning an arbitrary intersection-closed concept
class. For several concept classes our extended Closure Algorithm is
efficient and tolerates a noise rate up to the information-theoretic
upper bound. Finally, we show how to efficiently turn any algorithm
for the on-line noise model into a learning algorithm for the PAC model
with malicious noise.