**Abstract:**

This thesis is devoted to on-line learning. An on-line learning algorithm receives the elements of a sequence one by one and tries to predict each element before it arrives. The performance of such an algorithm is measured by the discrepancies between its predictions and the actual outcomes; over a number of trials these discrepancies add up to the total cumulative loss.

The starting point is the Aggregating Algorithm (AA). This algorithm deals with the problem of prediction with expert advice. In this thesis the existing theory of the AA is surveyed and some further properties are established.
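As a concrete illustration of prediction with expert advice (a minimal sketch, not the thesis's own formulation), the following code runs the AA on the logarithmic-loss game, where with learning rate $\eta = 1$ the algorithm reduces to the Bayes mixture of the experts' predictions:

```python
import math

def aggregating_algorithm_log_loss(expert_predictions, outcomes):
    """Aggregating Algorithm for the binary logarithmic-loss game.

    expert_predictions: per-trial lists of each expert's predicted
        probability that the outcome is 1.
    outcomes: the actual binary outcomes, revealed one per trial.

    For log loss the AA with eta = 1 is the Bayes mixture: the
    prediction is the weighted average of the experts' predictions,
    and each weight is multiplied by beta**loss with beta = e**(-1),
    i.e. by the likelihood the expert assigned to the outcome.
    """
    n_experts = len(expert_predictions[0])
    weights = [1.0 / n_experts] * n_experts  # uniform prior
    total_loss = 0.0
    for preds, omega in zip(expert_predictions, outcomes):
        # Predict with the mixture of the experts' predictions.
        gamma = sum(w * p for w, p in zip(weights, preds))
        # Suffer log loss on the revealed outcome.
        total_loss += -math.log(gamma if omega == 1 else 1.0 - gamma)
        # Multiplicative weight update, then renormalise.
        likelihoods = [p if omega == 1 else 1.0 - p for p in preds]
        weights = [w * l for w, l in zip(weights, likelihoods)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return total_loss
```

Because the log-loss game is mixable with $\eta = 1$, the cumulative loss of this learner never exceeds the loss of the best expert by more than $\ln N$, where $N$ is the number of experts.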

The concept of predictive complexity, introduced by V. Vovk, is a natural development of the theory of prediction with expert advice. Predictive complexity bounds the loss of every algorithm from below. In general this bound is not attained by the loss of any particular algorithm, but it is optimal 'in the limit'. It is thus an intrinsic measure of the 'learnability' of a finite sequence, similar to Kolmogorov complexity, which measures the descriptive complexity of a string independently of any particular description method. Different approaches to optimality give rise to different definitions of predictive complexity. This thesis addresses the problem of determining the strongest complexity that exists for a given loss function, and derives some positive and negative results related to this problem.
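A sketch of the standard construction (assuming binary outcomes $\omega \in \{0,1\}$ and a loss function $\lambda$; the thesis's own definitions may impose further regularity and computability conditions): a function $L$ on finite strings is a superloss process if $L(\Lambda) = 0$ and for every string $x$ there is a prediction $\gamma$ with
\[
L(x\omega) - L(x) \;\ge\; \lambda(\omega, \gamma) \quad \text{for both } \omega \in \{0,1\}.
\]
A predictive complexity $K$ is a superloss process that is minimal to within an additive constant: for every superloss process $L$ there is a constant $C$ such that $K(x) \le L(x) + C$ for all $x$.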

The expectations of predictive complexity w.r.t. the Bernoulli distribution provide a powerful tool for the theory of predictive complexity. Relations between these expectations and the loss functions are derived, and the expectations are applied to establish tight linear inequalities between different predictive complexities.

The final result of this thesis is the Unpredictability Property. It is a generalisation of the Incompressibility Property, which holds for Kolmogorov complexity. The Unpredictability Property states that the predictive complexity of most strings is close to a natural upper bound.
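For comparison, the Incompressibility Property rests on a simple counting argument: there are fewer than $2^{n-c}$ descriptions shorter than $n - c$ bits, so for the plain Kolmogorov complexity $C$,
\[
\#\{x \in \{0,1\}^n : C(x) < n - c\} \;<\; 2^{n-c},
\]
i.e., the fraction of strings of length $n$ compressible by $c$ or more bits is below $2^{-c}$. The Unpredictability Property plays the analogous role for predictive complexity, with the natural upper bound taking the place of the string's length.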

**Table of contents:**

- Introduction
- On-line Prediction
  - *2.1 The Basic Protocol*
  - *2.2 Games*
  - *2.3 Prediction with Expert Advice*
  - *2.4 Regularity Assumptions*
- The Aggregating Algorithm
  - *3.1 The Operation of the Aggregating Algorithm*
  - *3.2 Proofs*
  - *3.3 Optimality of the Aggregating Algorithm*
- The Constant $\beta$ and Mixability
  - *4.1 Geometric Images of Binary Games*
  - *4.2 The Geometric Interpretation of $\beta$*
  - *4.3 Some Simple Properties of $\beta$*
  - *4.4 Differential Criteria of Mixability*
  - *4.5 Mixability of Specific Games*
  - *4.6 Non-mixable Games*
  - *4.7 Continuous Games*
- Predictive Complexity
  - *5.1 Weaker Regularity Assumptions*
  - *5.2 Loss and Superloss Processes*
  - *5.3 Simple Predictive Complexity*
  - *5.4 Weaker Predictive Complexities*
  - *5.5 Complexities and Shifts*
  - *5.6 On the Existence of Weak Complexity*
- Expectations of Predictive Complexity
  - *6.1 The Main Lemmas on Expectations*
  - *6.2 Expectations and the Legendre Transformation*
  - *6.3 The Uniqueness Theorem*
- Linear Inequalities
  - *7.1 General Inequalities*
  - *7.2 Linear Inequalities between $K^{sq}$ and $K^{log}$*
- Incompressibility and Unpredictability
  - *8.1 Conditional Complexity*
  - *8.2 The Unpredictability Property*
  - *8.3 Tightness of the Bound*
  - *8.4 An Alternative Derivation*