A new algorithm for learning one-variable pattern languages from positive data
is proposed and analyzed with respect to its average-case behavior.
We consider the total learning time, which accounts for all
operations performed until convergence to a correct hypothesis is achieved.
For almost all meaningful distributions
defining how the pattern variable is replaced by a string to generate
random examples of the target pattern language,
it is shown that this algorithm converges within an expected
constant number of rounds, with an expected total learning time that is linear in the
pattern length. Thus, our solution is average-case optimal in a strong sense.
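To make the sampling model concrete, the following sketch (an illustration only; the alphabet, the target pattern, and the geometric length distribution are assumed here and are not taken from the paper) shows how random positive examples of a one-variable pattern language can be generated by replacing the variable x with a randomly drawn non-empty string:

import random

ALPHABET = "ab"          # hypothetical constant alphabet
TARGET = "abxbxa"        # hypothetical one-variable pattern with variable x

def random_substitution(p_stop=0.5):
    # Draw a non-empty string whose length is geometrically distributed.
    chars = [random.choice(ALPHABET)]
    while random.random() > p_stop:
        chars.append(random.choice(ALPHABET))
    return "".join(chars)

def random_example(pattern, var="x"):
    # Every occurrence of the variable receives the same substitution string.
    return pattern.replace(var, random_substitution())

for _ in range(5):
    print(random_example(TARGET))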
Though one-variable pattern languages can neither be finitely inferred
from positive data nor PAC-learned,
our approach can also be extended to a probabilistic finite learner
that exactly infers all one-variable pattern languages from positive
data with high confidence.
It is a long-standing open problem whether pattern languages
can be learned when pattern variables may also be substituted by
the empty string.
Our learning strategy can be generalized to this situation as well.
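In standard notation (assumed here, not fixed in the abstract itself), the non-erasing semantics analyzed above and the erasing variant addressed by this open problem differ only in whether the empty substitution is admitted:
\[
  L(\pi) \;=\; \{\, \pi[x/w] : w \in \Sigma^{+} \,\}
  \qquad\text{versus}\qquad
  L_{\varepsilon}(\pi) \;=\; \{\, \pi[x/w] : w \in \Sigma^{*} \,\},
\]
where $\pi$ is a one-variable pattern over $\Sigma \cup \{x\}$ and $\pi[x/w]$ denotes replacing every occurrence of $x$ by $w$.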
Finally, we present experimental results on the behaviour
of the new learning algorithm in practice.