A probabilistic representation of a string $x \in \{0,1\}^n$ is given by the code of a randomized algorithm that outputs $x$ with high probability [Oliveira, ICALP 2019]. We employ probabilistic representations to establish the first unconditional Coding Theorem in time-bounded Kolmogorov complexity. More precisely, we show that if a distribution ensemble $\mathcal{D}_m$ can be uniformly sampled in time $T(m)$ and generates a string $x \in \{0,1\}^*$ with probability at least $\delta$, then $x$ admits a time-bounded probabilistic representation of complexity $O(\log(1/\delta) + \log T(m) + \log m)$. Under mild assumptions, a representation of this form can be computed from $x$ and the code of the sampler in time polynomial in $n = |x|$.
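Written with the notation $\mathsf{rKt}$ for the complexity of time-bounded probabilistic representations (the probabilistic analogue of Levin's Kt from [Oliveira, ICALP 2019]), the Coding Theorem can be displayed as follows; constant factors and lower-order terms are as in the formal statement in the body of the paper:
\[
  \Pr_{y \sim \mathcal{D}_m}[\, y = x \,] \ge \delta
  \quad \Longrightarrow \quad
  \mathsf{rKt}(x) \le O\big(\log(1/\delta) + \log T(m) + \log m\big).
\]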
We derive consequences of this result relevant to the study of data compression, pseudodeterministic algorithms, time hierarchies for sampling distributions, and complexity lower bounds. In particular, we describe an instance-based search-to-decision reduction for Levin's Kt complexity [Levin, Information and Control 1984] and its probabilistic analogue rKt [Oliveira, ICALP 2019]. As a consequence, if a string $x$ admits a succinct time-bounded representation, then a near-optimal representation can be generated from $x$ with high probability in polynomial time. This partially addresses, in a time-bounded setting, a question posed in [Levin, Information and Control 1984] on the efficiency of computing an optimal encoding of a string.
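As a schematic reading of this reduction, write $\mathsf{opt}(x)$ for the optimal Kt (respectively rKt) complexity of $x$, and let $\Delta$ stand only as a placeholder for the additive overhead quantified in the body of the paper (no specific bound is claimed here): there is a randomized algorithm $A$ running in time $\mathrm{poly}(|x|)$ such that
\[
  \Pr\big[\, A(x) \text{ outputs a representation of } x \text{ of complexity at most } \mathsf{opt}(x) + \Delta \,\big] \ge 2/3.
\]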