We prove a version of the derandomized Direct Product Lemma for
deterministic space-bounded algorithms. Suppose a Boolean function
$g:\{0,1\}^n\to\{0,1\}$ cannot be computed on more than a $1-\delta$
fraction of inputs by any deterministic time $T$ and space $S$
algorithm, where $\delta\leq 1/t$ for some parameter $t$. Then, for $t$-step
walks $w=(v_1,\dots, v_t)$ in some explicit $d$-regular expander graph
on $2^n$ vertices, the function
$g'(w)\stackrel{\mathrm{def}}{=}g(v_1)\dots g(v_t)$ cannot be computed
on more than a $1-\Omega(t\delta)$ fraction of inputs by any
deterministic time $\approx T/d^t-\poly(n)$ and space
$\approx S-O(t)$ algorithm.
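Read in the contrapositive, the resource bounds above say that an
algorithm computing $g'$ on more than a $1-\Omega(t\delta)$ fraction of
inputs in time $T'$ and space $S'$ would yield one computing $g$ on
more than a $1-\delta$ fraction of inputs in time roughly
$d^t\,T'+\poly(n)$ and space $S'+O(t)$. To make the encoding itself
concrete, the following Python sketch evaluates $g'$ on one walk; the
neighbor map \texttt{neighbor} and the function \texttt{g} are
hypothetical placeholders (in particular, the circulant graph below is
only a stand-in, not an expander with good spectral gap).
\begin{verbatim}
# Sketch (not from the paper): evaluate g'(w) = g(v_1)...g(v_t)
# on a t-step walk, given by its start vertex and t-1 edge labels.
def direct_product(g, neighbor, v1, edge_labels):
    bits = [g(v1)]
    v = v1
    for j in edge_labels:            # labels in {0, ..., d-1}
        v = neighbor(v, j)           # one step along the walk
        bits.append(g(v))
    return bits                      # the t bits g(v_1), ..., g(v_t)

# Toy instantiation: a 4-regular circulant graph on 2**n vertices
# and the parity function, purely for illustration.
n = 8
def neighbor(v, j):
    return (v + (1, -1, 3, -3)[j]) % (2 ** n)
def g(v):
    return bin(v).count("1") % 2
print(direct_product(g, neighbor, v1=5, edge_labels=[0, 2, 1]))
\end{verbatim}
Note that a walk is specified by $n+(t-1)\log d$ bits (start vertex
plus edge labels), so $g'$ has input length $n+O(t\log d)$, in contrast
with the $nt$ bits of the standard $t$-fold direct product; this is
the sense in which the product is derandomized.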
As an application, by iterating this construction, we get a
deterministic linear-space ``worst-case to constant average-case''
hardness amplification reduction, as well as a family of logspace
encodable/decodable error-correcting codes that can correct up to a
constant fraction of errors. Logspace encodable/decodable codes (with
linear-time encoding and decoding) were previously constructed by
Spielman~\cite{Spi96}. Our codes have weaker parameters (the encoding
length is polynomial rather than linear), but a conceptually
simpler construction. The proof of our Direct Product Lemma is
inspired by Dinur's remarkable recent proof of the PCP theorem by gap
amplification using expanders~\cite{Din05}.