Given algorithms $A_1,A_2$ that each run in logarithmic space and linear time, there are two basic ways to compute the composition $x\mapsto A_2(A_1(x))$. Naive composition (storing the intermediate output $A_1(x)$) gives an algorithm that runs in linear time but uses linear space, whereas emulative composition (the standard composition of space-bounded algorithms, which recomputes bits of $A_1(x)$ on demand) gives an algorithm that uses logarithmic space but runs in quadratic time. Such time overheads are extremely common when designing low-space algorithms.
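To make the tradeoff concrete, here is a minimal runnable sketch; the bit-query interface, the class names, and the helper functions are our own illustration and are not taken from the paper. Naive composition materializes the intermediate string $A_1(x)$, while emulative composition reruns $A_1$ whenever $A_2$ needs one of its bits.

```python
# A minimal, self-contained sketch of the two strategies (the toy interface,
# class names, and functions here are ours for illustration, not the paper's).

class Reverse:
    """Toy linear-time, logspace function: reverse the input bit-string."""
    def run(self, x):
        return x[::-1]                 # compute the whole output at once
    def bit(self, x, i):
        # Recompute only the i-th output bit using O(log n) extra space.
        # (Here this is O(1) time; for a general logspace A1 it costs O(n).)
        return x[len(x) - 1 - i]

class Flip:
    """Toy linear-time, logspace function: flip every bit."""
    def run(self, y):
        return ''.join('1' if c == '0' else '0' for c in y)
    def run_with_oracle(self, n, bit):
        # The same computation, but reading input bits through an oracle.
        return ''.join('1' if bit(i) == '0' else '0' for i in range(n))

def naive_compose(A1, A2, x):
    """Linear time but linear space: materialize the intermediate output."""
    y = A1.run(x)                      # O(n) time, O(n) space to store y
    return A2.run(y)                   # O(n) time

def emulative_compose(A1, A2, x):
    """Logarithmic space but quadratic time: recompute bits of A1(x) on demand."""
    # Each of A2's ~n input queries reruns A1 from scratch (O(n) time each in
    # general), for O(n^2) total time while the space stays O(log n).
    # For simplicity this toy assumes |A1(x)| = |x|.
    return A2.run_with_oracle(len(x), lambda i: A1.bit(x, i))

assert naive_compose(Reverse(), Flip(), "0011") \
    == emulative_compose(Reverse(), Flip(), "0011") == "0011"
```

In these toy classes `bit` happens to be constant-time; the quadratic cost appears for a general logspace $A_1$, where answering each bit query means rerunning the whole computation.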
A natural question is whether one can enjoy the best of both worlds: for every such $A_1$ and $A_2$, is there an algorithm that computes $A_2\circ A_1$ in both linear time and small space? We prove the following:
1: For linear-time algorithms, this is unconditionally impossible; any space-efficient composition must be polynomially slower than the naive algorithm.
2: Extending the unconditional lower bound in any one of several natural ways (e.g., to algorithms running in large polynomial time, or to a lower bound for $k$-fold composition that grows as $n^{\omega_k(1)}$; see the calculation below) would imply breakthrough algorithms for a major question in complexity theory, namely the derandomization of probabilistic logspace.
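For intuition on the $k$-fold setting, here is a back-of-envelope calculation of ours, under the simplifying assumption that every level has linear-length inputs and outputs: in emulative composition, each of the $\approx n$ steps at level $k$ may trigger a full recomputation at level $k-1$, so

$$T_1(n)=O(n),\qquad T_k(n)=O(n)\cdot T_{k-1}(n)\ \Longrightarrow\ T_k(n)=n^{O(k)},\qquad S_k(n)=O(k\log n).$$

A lower bound of $n^{\omega_k(1)}$ would thus say that an exponent growing with $k$, as in this upper bound, is unavoidable for space-efficient composition.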
The main conceptual contribution of our work is a connection between three questions: the complexity of composition, time-space tradeoffs for deterministic algorithms, and derandomization of probabilistic logspace. This connection gives additional motivation to study time-space tradeoffs, even ones proved under complexity-theoretic assumptions, and the motivation is strengthened by the fact that the tradeoffs required in our results are very modest (i.e., merely super-linear tradeoffs for deterministic algorithms).
To prove our results we develop the technical toolkit of an ongoing line of work studying pseudorandomness for low-space algorithms, and as a by-product we improve several very recent results in this line. For example: we show a new ``win-win pair of low-space algorithms'', extending the construction of (Doron, Pyne, Tell, Williams, STOC 2025) to more general functions and eliminating a previously large polynomial runtime overhead; we reduce the task of derandomizing low-space algorithms to the seemingly tractable task of proving lower bounds against uniform read-once branching programs; and we introduce two highly efficient targeted pseudorandom generators that use optimized versions of previous generators as building blocks.
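To make the branching-program model concrete: a width-$w$, length-$n$ read-once branching program reads its input bits once, in order, updating a state of $O(\log w)$ bits at each step. The evaluator below is our own illustrative code, not the paper's (whose results concern uniform such programs, i.e., families generated by an efficient algorithm).

```python
def eval_robp(transitions, x, start=0):
    """Evaluate a layered read-once branching program (illustrative code, ours).

    transitions[i][s][b] is the state reached from state s in layer i upon
    reading input bit b. The program reads x[0..n-1] once, in order, keeping
    only the current state, i.e., O(log w) bits of memory.
    """
    state = start
    for i, bit in enumerate(x):
        state = transitions[i][state][bit]
    return state

# Tiny example: a width-2 program computing the parity of 3 bits.
parity_layer = [[0, 1], [1, 0]]            # state := state XOR bit
parity = [parity_layer] * 3
assert eval_robp(parity, [1, 0, 1]) == 0   # parity of 1,0,1 is 0
```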