All reports by author Peter Auer:

#### Approximating Hyper-Rectangles: Learning and Pseudo-random Sets

TR00-072 | 14th July 2000
Peter Auer, Philip M. Long, Aravind Srinivasan

#### On-line Learning with Malicious Noise and the Closure Algorithm

TR00-071 | 14th July 2000
Peter Auer, Nicolo Cesa-Bianchi

#### Tracking the best disjunction

TR00-070 | 14th July 2000
Peter Auer, Manfred K. Warmuth

#### Learning Nested Differences in the Presence of Malicious Noise

TR00-069 | 14th July 2000
Peter Auer

#### Gambling in a rigged casino: The adversarial multi-armed bandit problem

TR00-068 | 13th July 2000
Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, Robert E. Schapire

#### Simulating Access to Hidden Information while Learning

TR00-067 | 13th July 2000
Peter Auer, Philip M. Long

#### On Learning from Ambiguous Information

TR00-066 | 14th July 2000
Peter Auer

#### On-line Learning of Rectangles in Noisy Environments

TR00-063 | 13th July 2000
Peter Auer

#### Learning of Depth Two Neural Networks with Constant Fan-in at the Hidden Nodes

TR00-055 | 14th July 2000
Peter Auer, Stephen Kwek, Manfred K. Warmuth

#### On the Complexity of Function Learning

TR00-050 | 13th July 2000
Peter Auer, Philip M. Long, Gerhard J. Woeginger

Peter Auer, Philip M. Long, Aravind Srinivasan (TR00-072)

The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) ...

Peter Auer, Nicolo Cesa-Bianchi (TR00-071)

We investigate a variant of the on-line learning model for classes of {0,1}-valued functions (concepts) in which the labels of a certain amount of the input instances are corrupted by adversarial noise. We propose an extension of a general learning strategy, known as the "Closure Algorithm", to this noise ...
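The "Closure Algorithm" referred to above maintains as its hypothesis the closure of the positive examples seen so far, i.e. the smallest member of an intersection-closed class that contains them, and predicts positive exactly on that closure. As an illustration (not code from the report), here is a minimal sketch for the intersection-closed class of axis-parallel rectangles, where the closure is simply the coordinate-wise bounding box:

```python
def closure(points):
    """Closure of a set of points under axis-parallel rectangles:
    the coordinate-wise bounding box, i.e. the smallest rectangle
    in the class containing every point."""
    dims = range(len(points[0]))
    lo = [min(p[d] for p in points) for d in dims]
    hi = [max(p[d] for p in points) for d in dims]
    return lo, hi

def predict(hypothesis, x):
    """Predict positive exactly on the closure of the positives."""
    lo, hi = hypothesis
    return all(lo[d] <= x[d] <= hi[d] for d in range(len(x)))
```

In the on-line version, the learner predicts negative until it makes a mistake on a positive example and then enlarges the closure to include it.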

Peter Auer, Manfred K. Warmuth (TR00-070)

Littlestone developed a simple deterministic on-line learning algorithm for learning $k$-literal disjunctions. This algorithm (called Winnow) keeps one weight for each variable and does multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm ...
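Littlestone's deterministic Winnow, the starting point for the randomized variant studied here, can be sketched as follows. This is a standard textbook rendering of Winnow1 (with elimination on demotion), not code from the report; the promotion factor 2 and threshold n are the usual default choices:

```python
def winnow(examples, n, threshold=None):
    """Winnow1: learn a monotone k-literal disjunction over n Boolean
    variables, doubling weights of active variables on a false
    negative and eliminating them on a false positive."""
    if threshold is None:
        threshold = n  # the standard threshold choice
    w = [1.0] * n
    mistakes = 0
    for x, label in examples:  # x is a 0/1 vector, label in {0, 1}
        pred = 1 if sum(w[i] for i in range(n) if x[i]) >= threshold else 0
        if pred != label:
            mistakes += 1
            for i in range(n):
                if x[i]:
                    # promote on a missed positive, eliminate on a
                    # false positive
                    w[i] = w[i] * 2 if label == 1 else 0.0
    return w, mistakes
```

The multiplicative update is what gives Winnow its mistake bound of O(k log n), logarithmic in the number of irrelevant variables.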

Peter Auer (TR00-069)

We present a PAC-learning algorithm and an on-line learning algorithm for nested differences of intersection-closed classes. Examples of intersection-closed classes include axis-parallel rectangles, monomials, and linear sub-spaces. Our PAC-learning algorithm uses a pruning technique that we rigorously prove correct. As a result we show that ...
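A nested difference has the form S1 \ (S2 \ (S3 \ ...)) with each Si drawn from the underlying class. Membership can be decided by unwinding the nesting from the innermost set outward, as in this small sketch (an illustration of the concept class, not code from the report):

```python
def in_nested_difference(sets, x):
    """Membership in the nested difference S1 \\ (S2 \\ (S3 \\ ...)).

    Evaluating from the innermost set outward: x belongs to the
    current level iff it lies in that level's set but not in the
    (already evaluated) inner nested difference."""
    member = False
    for s in reversed(sets):
        member = (x in s) and not member
    return member
```

For example, with S1 ⊇ S2 ⊇ S3 the concept consists of the points in S1 \ S2 together with those in S3.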

Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, Robert E. Schapire (TR00-068)

In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration ...
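The best-known algorithm from this line of work is the exponential-weight algorithm Exp3, which mixes a weight-based distribution with uniform exploration and feeds importance-weighted reward estimates back into the weights. A minimal sketch follows; the exploration rate `gamma` and the reward scaling to [0, 1] are standard conventions, not parameters quoted from the report:

```python
import math
import random

def exp3(rewards, K, gamma=0.1):
    """Exp3 for the adversarial K-armed bandit. rewards[t][i] is the
    (adversarially chosen) reward in [0, 1] of arm i at trial t; the
    player observes only the chosen arm's reward."""
    w = [1.0] * K
    total = 0.0
    for reward_t in rewards:
        sw = sum(w)
        # mix the weight distribution with uniform exploration
        p = [(1 - gamma) * w[i] / sw + gamma / K for i in range(K)]
        r, acc, arm = random.random(), 0.0, K - 1
        for i in range(K):
            acc += p[i]
            if r < acc:
                arm = i
                break
        x = reward_t[arm]
        total += x
        # importance-weighted estimate keeps the update unbiased
        w[arm] *= math.exp(gamma * (x / p[arm]) / K)
    return total
```

Because only the chosen arm's reward is observed, dividing by the probability of the chosen arm is what makes the reward estimate unbiased against an adversary.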

Peter Auer, Philip M. Long (TR00-067)

We introduce a new technique which enables a learner without access to hidden information to learn nearly as well as a learner with access to hidden information. We apply our technique to solve an open problem of Maass and Turan, showing that for any concept class F the least ...

Peter Auer (TR00-066)

We investigate a variant of the Probably Approximately Correct learning model where the learner has to learn from ambiguous information. The ambiguity is introduced by assuming that the learner does not receive single instances with their correct labels as training data, but that the learner receives ...

Peter Auer (TR00-063)

We investigate the implications of noise in the equivalence query model. Besides some results for general target and hypotheses classes, we prove bounds on the learning complexity of d-dimensional discretized rectangles in the case where only rectangles are allowed as hypotheses.

Our noise model assumes ...

Peter Auer, Stephen Kwek, Manfred K. Warmuth (TR00-055)

We present algorithms for learning depth two neural networks where the hidden nodes are threshold gates with constant fan-in. The transfer function of the output node might be more general: we have results for the cases when the threshold function, the logistic function or the identity function is ...
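The architecture studied here can be made concrete with a small evaluation sketch: each hidden node is a threshold gate over a constant-size subset of the inputs, and the output node applies a transfer function to a weighted sum of the hidden outputs. The representation below (gates as lists of (input index, weight) pairs) is an illustrative encoding, not the report's notation:

```python
def depth_two_net(x, hidden_gates, out_weights, transfer):
    """Evaluate a depth-two network: hidden threshold gates with
    constant fan-in, and an output node applying `transfer` (e.g. a
    threshold, the logistic function, or the identity) to a weighted
    sum of the hidden-node outputs."""
    h = [1 if sum(w * x[i] for i, w in gate) >= theta else 0
         for gate, theta in hidden_gates]
    return transfer(sum(w * hv for w, hv in zip(out_weights, h)))
```

For instance, with the identity transfer function the network computes a weighted vote of its hidden threshold gates.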

Peter Auer, Philip M. Long, Gerhard J. Woeginger (TR00-050)

The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0,1}. Much less is known about the theory of learning functions with a larger range such as N or R. In ...