A new notion of predictive complexity and a corresponding amount of
information are considered.
Predictive complexity is a generalization of Kolmogorov complexity
which bounds the ability of any algorithm to predict elements of
a sequence of outcomes. We consider predictive complexity for a wide class
of bounded loss functions that generalize the square-loss function.
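For concreteness, predictive complexity is usually defined through the following on-line prediction protocol; the notation $\gamma_i$, $\lambda$, and $\mathrm{Loss}_A$ below is illustrative rather than taken from this paper. A prediction strategy $A$, having seen $x_1\dots x_{i-1}$, announces a prediction $\gamma_i\in[0,1]$, after which the outcome $x_i\in\{0,1\}$ is revealed:
% Total loss of a strategy A on x = x_1 ... x_n; square loss is the
% special case lambda(sigma, gamma) = (sigma - gamma)^2.
\[
  \mathrm{Loss}_A(x) \;=\; \sum_{i=1}^{n} \lambda(x_i,\gamma_i),
  \qquad
  \lambda(\sigma,\gamma) \;=\; (\sigma-\gamma)^2 .
\]
Roughly speaking, $KG(x)$ can then be thought of as the smallest total loss attainable on $x$ by computable prediction strategies, up to an additive constant.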
Relations between the unconditional predictive complexity $KG(x)$
and the conditional predictive complexity $KG(x|y)$ are studied.
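One such relation holds for simple reasons, by analogy with Kolmogorov complexity; the following sketch uses the illustrative notation above and is not a statement from this paper. A strategy for the unconditional game can be run verbatim in the game with side information $y$, ignoring $y$ and incurring the same loss, so
% Conditioning on side information can only decrease predictive
% complexity, up to an additive constant.
\[
  KG(x|y) \;\le\; KG(x) + O(1).
\]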
We define an algorithm with an ``expanding property'': with positive
probability, it transforms sequences of a given predictive complexity
into sequences of essentially larger predictive complexity.
The notion of the amount of predictive information $IG(y:x)$ is also studied.
We show that this information is non-commutative (i.e., non-symmetric) in a
very strong sense and present asymptotic relations between the quantities
$IG(y:x)$, $IG(x:y)$, $KG(x)$, and $KG(y)$.
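For reference, the amount of predictive information is defined by analogy with the algorithmic mutual information; we record the form standard in this setting (taken here as an assumption rather than quoted from the paper):
% The decrease in the predictive complexity of x brought about by
% knowing y; non-commutativity in a strong sense means that IG(y:x)
% and IG(x:y) can differ widely.
\[
  IG(y:x) \;=\; KG(x) - KG(x|y).
\]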