A natural model of a source of randomness consists of a long stream of symbols $X = X_1\circ\ldots\circ X_t$, with some guarantee on the entropy of $X_i$ conditioned on the outcome $x_1,\dots,x_{i-1}$ of the prefix. We study unpredictable sources, a generalization of the almost Chor--Goldreich (CG) sources considered in [DMOZ23]. In an unpredictable source $X$, for a typical draw $x\sim X$ and for most indices $i$, the symbol $x_i$ has low probability of occurring given $x_1,\dots,x_{i-1}$. Such a model relaxes the unrealistic assumption underlying a CG source, namely that for every $i$ and every prefix $x_1,\dots,x_{i-1}$, the next symbol $X_i$ has sufficiently large entropy.
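To make the unpredictability condition concrete, the following toy sketch (a minimal Python illustration, not part of our construction) samples from a hypothetical bit source whose conditional distributions contain a few low-entropy steps planted by a toy adversary, and counts how many coordinates of the draw have conditional probability at most $2^{-\delta}$; the alphabet, the adversarial pattern, the threshold $\delta$, and the function names are assumptions of the sketch only.
\begin{verbatim}
import random

# Toy "source": explicit conditional distributions for the next bit, with a
# few low-entropy ("bad") steps planted by a hypothetical adversary.  All
# parameters here are illustrative assumptions, not taken from the paper.

def next_symbol_dist(prefix):
    if len(prefix) % 10 == 0:
        return {0: 0.99, 1: 0.01}   # bad step: the next bit is almost fixed
    return {0: 0.5, 1: 0.5}         # good step: the next bit is uniform

def sample(t):
    x, cond_probs = [], []
    for _ in range(t):
        dist = next_symbol_dist(x)
        s = random.choices(list(dist), weights=list(dist.values()))[0]
        cond_probs.append(dist[s])  # Pr[X_i = x_i | prefix], as in the text
        x.append(s)
    return x, cond_probs

x, cond_probs = sample(1000)
delta = 0.9   # unpredictability threshold 2^{-delta}, an arbitrary choice
unpredictable = sum(p <= 2 ** -delta for p in cond_probs)
print(f"{unpredictable} of {len(x)} coordinates are unpredictable in this draw")
\end{verbatim}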
Unpredictable sources subsume all previously considered notions of almost CG sources, including those for which it was not known whether random walks using $X$ mix, and those that are equivalent to general sources with high min-entropy.
We prove that random walks using unpredictable sources do mix, and as a result obtain seeded online condensers with constant entropy gap, as well as (seedless) deterministic condensers outputting a constant fraction of the entropy. In particular, our condensers run in space comparable to the total entropy of the stream $X$, even when its length is not known ahead of time. As another corollary, we obtain a new extractor based on expander random walks that handles lower entropy than the classic expander-based construction relying on spectral techniques [Gil98].
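For intuition only, here is a minimal Python sketch of the generic walk-based condensing template suggested by this result: each symbol of the stream selects an outgoing edge of a $d$-regular graph, and the condenser's state and output are just the current vertex. The placeholder graph below (a cycle with chords), the alphabet $\{0,1,2,3\}$, the vertex count, and the names \texttt{neighbor} and \texttt{walk\_condense} are illustrative assumptions; they are not the lossless expanders or parameters of our actual construction.
\begin{verbatim}
# Hedged sketch of the walk-based condensing template: each stream symbol
# selects an outgoing edge of a d-regular graph, and the online state is
# just the current vertex (log2(n) bits).  The graph below, a cycle with
# chords, is a placeholder and NOT a lossless expander.

def neighbor(v, symbol, n):
    steps = {0: 1, 1: -1, 2: 7, 3: -7}   # 4 outgoing edge labels per vertex
    return (v + steps[symbol]) % n

def walk_condense(stream, n=2 ** 20, start=0):
    v = start
    for symbol in stream:       # one step of the walk per symbol, online
        v = neighbor(v, symbol, n)
    return v                    # condensed output: the final vertex label

out = walk_condense([0, 3, 1, 2, 0, 0, 3])   # toy stream; real streams are long
\end{verbatim}
The point of the sketch is the online, low-space nature of the template: the walk processes one symbol at a time and stores only the current vertex, so its space is governed by the vertex-label length rather than by the stream length.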
As our main technical tool, we provide a novel analysis covering a key case of adversarial random walks on lossless expanders that [DMOZ23] fails to address. As part of the analysis, we provide a ``chain rule for vertex probabilities''. The standard chain rule states that for every $x$ in the support of $X$ and every $i$, $\Pr[X_{[1,i]} = x_1,\dots,x_i] = \Pr[X_i = x_i \mid X_{[1,i-1]}=x_1,\dots,x_{i-1}] \cdot \Pr[X_{[1,i-1]} = x_1,\dots,x_{i-1}]$. If $W(x_1,\dots,x_i)$ is the vertex reached using $x_1,\dots,x_i$, then the chain rule for vertex probabilities essentially states that the same phenomenon occurs for a typical $x$:
\[\Pr[V_i = W(x_1,\dots,x_i)] \leq \Pr[X_i = x_i \mid X_{[1,i-1]}=x_1,\dots,x_{i-1}] \cdot \Pr[V_{i-1} = W(x_1,\dots,x_{i-1})],\]
where $V_i$ is the vertex distribution of the random walk at step $i$ using $X$.
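To see, heuristically, how such a chain rule yields entropy accumulation, suppose (ignoring the ``typical $x$'' and ``most $i$'' caveats under which the inequality actually holds) that it applies at every step. Telescoping over $i = t, t-1, \dots, 1$ and applying the standard chain rule gives
\[\Pr[V_t = W(x_1,\dots,x_t)] \leq \prod_{i=1}^{t}\Pr[X_i = x_i \mid X_{[1,i-1]}=x_1,\dots,x_{i-1}] \cdot \Pr[V_0 = v_0] = \Pr[X = x]\cdot \Pr[V_0 = v_0],\]
where $v_0$ is the start vertex of the walk. That is, the vertex reached by a typical $x$ is roughly as unlikely as $x$ itself, which is the sense in which the walk accumulates the entropy of the stream.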