This paper initiates the study of deterministic amplification of space-bounded
probabilistic algorithms. The straightforward implementations of
known amplification methods cannot be used for such algorithms, since they
consume too much space. We present a new implementation of the
Ajtai-Koml\'{o}s-Szemer\'{e}di method, that enables to amplify an $S$
space algorithm
that uses $r$ random bits and errs with probability $\epsilon$ to an $O(kS)$
space algorithm that uses $r + O(k)$ random bits and errs with probability
$\epsilon^{\Omega(k)}$.
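For orientation, the Ajtai-Koml\'{o}s-Szemer\'{e}di method itself can be
sketched as follows (this is the classical expander-walk amplification, not
the new implementation, and the constants are illustrative):
\begin{itemize}
\item Fix an explicit constant-degree expander graph $G$ on the vertex set
$\{0,1\}^r$ of possible random tapes.
\item Use the original $r$ random bits to choose a start vertex $v_0$, and
take a random walk $v_0, v_1, \ldots, v_{\Theta(k)}$ on $G$, spending $O(1)$
random bits (the logarithm of the degree) per step, i.e.\ $O(k)$ bits in total.
\item Run the algorithm with each $v_i$ as its random tape and combine the
$\Theta(k)$ answers (e.g.\ by a majority vote); the properties of expander
walks give the $\epsilon^{\Omega(k)}$ error bound stated above.
\end{itemize}
The obstacle for a straightforward $S$-space simulation is that the walk
operates on $r$-bit vertex labels, and in general $r$ may far exceed $S$; the
online constructions described below are designed to sidestep exactly this.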
Our method can be used to reduce the error probability of $BPL$ algorithms
below any constant, using only a constant number of additional random bits.
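Concretely, writing $c$ for the constant hidden in the $\Omega(\cdot)$ above:
to push the error below a desired constant $\delta$ it suffices to take
\[
 k \;=\; \left\lceil \frac{\log(1/\delta)}{c\,\log(1/\epsilon)} \right\rceil,
 \qquad\text{so that}\qquad
 \epsilon^{c k} \;\le\; \delta,
\]
which for constant $\epsilon$ and $\delta$ is a constant; the cost is
$O(k)=O(\log(1/\delta))$ additional random bits and only a constant-factor
increase in space.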
This is weaker than the exponential error reduction that can be achieved for
$BPP$ algorithms by methods that use only $O(r)$ random bits. However, we prove
that any black-box amplification method that uses $O(r)$ random bits and makes
at most $p$ parallel simulations can reduce the error to at best
$\epsilon^{O(p)}$. Hence, in $BPL$, where $p$ must be a constant (only
constantly many simulations can run in parallel within logarithmic space), the
error cannot be reduced below a constant. This means that our method is optimal
among black-box amplification methods that use $O(r)$ random bits.
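To spell out the consequence, write $c$ for the constant hidden in the
$O(\cdot)$ above; then a black-box method that uses $O(r)$ random bits and $p$
parallel simulations of an algorithm with, say, error $\epsilon = 1/3$ achieves
error no better than
\[
 \epsilon^{c\,p} \;=\; 3^{-c\,p},
\]
which, for the constant $p$ forced by a logarithmic space bound, is itself a
constant independent of the input length.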
The new implementation is based on explicit constructions of constant-space
{\em online extractors} and {\em online expanders}. These are extractors
and expanders whose neighborhoods can be computed in constant space by
a Turing machine with a one-way input tape.
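For expanders, for example, the requirement can be phrased as follows (a
paraphrase of the definition above, with $D$ denoting the degree and
$\Gamma_G$ the neighbor function; for extractors the role of $\Gamma_G$ is
played by the extractor's output): there is a Turing machine $M$ that uses
$O(1)$ work space and reads the vertex label in a single left-to-right pass of
its input tape, such that
\[
 M(i, v) \;=\; \Gamma_G(v, i)
 \qquad\text{for every vertex } v \in \{0,1\}^r \text{ and index } i \in [D].
\]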