Let \mathcal{L} be a language that can be decided in linear space and let \epsilon >0 be any constant. Let \mathcal{A} be the exponential hardness assumption that for every n, membership in \mathcal{L} for inputs of length~n cannot be decided by circuits of size smaller than 2^{\epsilon n}.
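In symbols (our notation, not the paper's: write \mathcal{L}_n for the characteristic function of \mathcal{L} restricted to inputs of length n, and \mathsf{SIZE}(g) for the minimum size of a Boolean circuit computing g), the assumption reads:
\[ \mathcal{A} : \qquad \forall n , \quad \mathsf{SIZE}(\mathcal{L}_n) \geq 2^{\epsilon n} . \]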
We prove that for every function f :\{0,1\}^* \rightarrow \{0,1\}, computable by a randomized logspace algorithm R, there exists a deterministic logspace algorithm D (attempting to compute f), such that on every input x of length n, the algorithm D outputs one of the following:
1: The correct value f(x).
2: The string: ``I am unable to compute f(x) because the hardness assumption \mathcal{A} is false'', followed by a (provenly correct) circuit of size smaller than 2^{\epsilon n'} for membership in \mathcal{L} for inputs of length~n', for some n' = \Theta (\log n); that is, a circuit that refutes \mathcal{A}.
Moreover, D is explicitly constructed, given R.
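To make the output contract concrete, here is a minimal type-level sketch in Python. All names are our hypothetical illustrations (the paper specifies no such interface), and the circuit representation is left abstract:
\begin{verbatim}
# Hypothetical sketch of the output contract of D; names are ours, not the paper's.
from dataclasses import dataclass
from typing import Union

@dataclass
class CertifiedValue:
    """Case 1: the returned bit is guaranteed to equal f(x)."""
    value: int  # f(x), a bit in {0, 1}

@dataclass
class Refutation:
    """Case 2: a refutation of the hardness assumption A."""
    message: str       # "I am unable to compute f(x) because ..."
    input_length: int  # n' = Theta(log n)
    circuit: object    # provenly correct circuit of size < 2^{epsilon * n'}
                       # deciding membership in L on inputs of length n'

# The guarantee: on every input x, D outputs exactly one of the two cases,
# and it never outputs an incorrect CertifiedValue or an invalid Refutation.
DOutput = Union[CertifiedValue, Refutation]
\end{verbatim}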
We note that previous works on the hardness-versus-randomness paradigm give derandomized algorithms that rely blindly on the hardness assumption: if the hardness assumption is false, these algorithms may output incorrect values, so a user cannot trust that a given output is correct. In contrast, our algorithm D verifies the computation, so that it never outputs an incorrect value. Thus, if D outputs a value for f(x), that value is certified to be correct; and if D does not output a value for f(x), it alerts the user that the hardness assumption was found to be false and refutes that assumption.
Our next result is a universal derandomizer for BPL (the class of problems solvable by bounded-error randomized logspace algorithms): we give a deterministic algorithm U that takes as input a randomized logspace algorithm R and an input x, and simulates the computation of R on x deterministically. Under the widely believed assumption BPL = L, the space used by U is at most C_R \cdot \log n (where C_R is a constant depending on~R). More generally, for every constant c \geq 1, if BPL \subseteq SPACE[(\log n)^{c}], then the space used by U is at most C_R \cdot (\log n)^{c}.
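In symbols (our phrasing of the guarantee just stated, writing \mathrm{space}_U(R,x) for the space used by U on inputs R and x, where |x| = n):
\[ \text{for every constant } c \geq 1 : \quad \mathsf{BPL} \subseteq \mathsf{SPACE}\bigl[(\log n)^{c}\bigr] \ \Longrightarrow\ \mathrm{space}_U(R,x) \leq C_R \cdot (\log n)^{c} , \]
with the assumption BPL = L corresponding to the case c = 1.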
Finally, we prove that if optimal hitting sets for ordered branching programs exist, then there is a deterministic logspace algorithm that, given black-box access to an ordered branching program B of size n, estimates the probability that B accepts a uniformly random input. This extends a result of Cheng and Hoza (CCC 2020), who proved that optimal hitting sets imply a white-box two-sided derandomization.
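For concreteness, ``estimates'' here refers to the standard additive-error notion (our simplified phrasing; the exact quantifiers and parameters are as in the paper): the algorithm outputs a value \tilde{p} with
\[ \Bigl|\, \tilde{p} - \Pr_{u \sim \{0,1\}^{\ell}}\bigl[\, B \text{ accepts } u \,\bigr] \Bigr| \leq \varepsilon , \]
where \ell denotes the input length of B and \varepsilon the desired accuracy.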