The PAC learning of rectangles has been studied because they have
been found experimentally to yield excellent hypotheses for several
applied learning problems. Also, pseudorandom sets for rectangles
have been actively studied recently because (i) they are a subproblem
common to the derandomization of depth-2 (DNF) ...
more >>>
We investigate a variant of the on-line learning model for classes
of {0,1}-valued functions (concepts) in which the labels of a certain
number of the input instances are corrupted by adversarial noise.
We propose an extension of a general learning strategy, known as
"Closure Algorithm", to this noise ...
more >>>
Littlestone developed a simple deterministic on-line learning
algorithm for learning $k$-literal disjunctions. This algorithm
(called Winnow) keeps one weight for each variable and does
multiplicative updates to its weights. We develop a randomized
version of Winnow and prove bounds for an adaptation of the
algorithm ...
more >>>
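
To make the update rule above concrete, here is a minimal sketch of deterministic Winnow for monotone disjunctions: one weight per variable, and multiplicative promotion or demotion of the weights of the active variables whenever a mistake is made. The threshold n, the update factor 2, and the tiny data set are illustrative choices of this sketch; the randomized variant studied in the paper is not reproduced here.

# Minimal sketch of deterministic Winnow for monotone disjunctions over n
# Boolean variables: one weight per variable, multiplicative updates on
# mistakes.  Threshold n and factor alpha = 2 are illustrative choices.

def winnow(examples, n, alpha=2.0):
    w = [1.0] * n                    # one weight per variable
    theta = float(n)                 # prediction threshold
    mistakes = 0
    for x, label in examples:        # x is a 0/1 tuple, label is 0 or 1
        score = sum(w[i] for i in range(n) if x[i])
        pred = 1 if score >= theta else 0
        if pred != label:
            mistakes += 1
            # Promote (double) weights of active variables on a false
            # negative, demote (halve) them on a false positive.
            factor = alpha if label == 1 else 1.0 / alpha
            for i in range(n):
                if x[i]:
                    w[i] *= factor
    return mistakes, w

# Toy run: target disjunction is x0 OR x2 over n = 4 variables.
data = [((1, 0, 0, 0), 1), ((0, 1, 0, 0), 0), ((0, 0, 1, 1), 1), ((0, 1, 0, 1), 0)]
print(winnow(data, n=4))
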
We present a PAC-learning algorithm and an on-line learning algorithm
for nested differences of intersection-closed classes. Examples of
intersection-closed classes include axis-parallel rectangles,
monomials, and linear sub-spaces. Our PAC-learning algorithm uses a
pruning technique that we rigorously prove correct. As a result we
show that ...
more >>>
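
For readers unfamiliar with the representation: a nested difference over concepts C1, ..., Ck is the set C1 \ (C2 \ (C3 \ ... \ Ck)). The sketch below only evaluates membership in such a hypothesis, using axis-parallel rectangles (one of the intersection-closed classes named above) as the base class; the rectangles chosen are invented for illustration, and neither the PAC algorithm nor the pruning technique from the abstract is shown.

# Membership test for a nested difference C1 \ (C2 \ (C3 \ ... \ Ck)) whose
# base concepts are axis-parallel rectangles given by (lower, upper) corners.

def in_rectangle(x, rect):
    lo, hi = rect
    return all(l <= xi <= h for xi, l, h in zip(x, lo, hi))

def in_nested_difference(x, rects):
    # x lies in C1 \ (C2 \ (C3 \ ...)) iff x lies in C1 but not in the
    # nested difference built from the remaining concepts.
    if not rects:
        return False
    return in_rectangle(x, rects[0]) and not in_nested_difference(x, rects[1:])

# C1 = [0,10]^2, C2 = [2,8]^2, C3 = [4,6]^2
concepts = [((0, 0), (10, 10)), ((2, 2), (8, 8)), ((4, 4), (6, 6))]
print(in_nested_difference((1, 1), concepts))   # True: in C1 only
print(in_nested_difference((3, 3), concepts))   # False: in C1 and C2, not C3
print(in_nested_difference((5, 5), concepts))   # True: in all three
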
In the multi-armed bandit problem, a gambler must decide which arm
of K non-identical slot machines to play in a sequence of trials
so as to maximize his reward.
This classical problem has received much attention because of the
simple model it provides of the trade-off between
exploration ...
more >>>
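
For concreteness, a minimal simulation of the trial-by-trial protocol: in each trial the gambler plays one of the K arms and observes only the reward of the arm played. The epsilon-greedy policy and the Bernoulli reward model below are generic placeholders chosen to make the exploration/exploitation trade-off tangible; they are not the setting or the algorithm analyzed in the paper.

import random

# K-armed bandit loop: in each trial the gambler plays one arm and observes
# only that arm's reward.  Epsilon-greedy is a placeholder strategy here.

def play_bandit(arm_means, trials=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    K = len(arm_means)
    counts = [0] * K            # number of plays per arm
    estimates = [0.0] * K       # empirical mean reward per arm
    total = 0.0
    for _ in range(trials):
        if rng.random() < epsilon:                          # explore
            arm = rng.randrange(K)
        else:                                               # exploit
            arm = max(range(K), key=lambda i: estimates[i])
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

print(play_bandit([0.2, 0.5, 0.8]))   # reward accumulated over 1000 trials
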
We introduce a new technique which enables a learner without access to
hidden information to learn nearly as well as a learner with access to
hidden information. We apply our technique to solve an open problem
of Maass and Turan, showing that for any concept class F the least ...
more >>>
We investigate a variant of the Probably Approximately Correct learning model
where the learner has to learn from ambiguous information. The
ambiguity is introduced by assuming that the learner does not receive
single instances with their correct labels as training data, but that
the learner receives ...
more >>>
We investigate the implications of noise in the equivalence query
model. Besides some results for general target and hypothesis
classes, we prove bounds on the learning complexity of d-dimensional
discretized rectangles in the case where only rectangles are allowed
as hypotheses.
Our noise model assumes ...
more >>>
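
As background, here is a sketch of the noise-free equivalence query protocol for discretized rectangles with rectangle hypotheses: the learner proposes a rectangle, and an oracle either accepts it or returns a counterexample. The brute-force oracle and the bounding-box learner below are illustrative assumptions of this sketch; the noisy answers studied in the abstract are not modelled.

from itertools import product

# Noise-free equivalence query protocol for discretized rectangles with
# rectangle hypotheses.  The learner grows the bounding box of the positive
# counterexamples it receives; the oracle is a brute-force stand-in.

def in_rect(x, rect):
    lo, hi = rect
    return all(l <= xi <= h for xi, l, h in zip(x, lo, hi))

def equivalence_oracle(hypothesis, target, domain):
    # Return a point on which hypothesis and target disagree, or None.
    for x in domain:
        if in_rect(x, hypothesis) != in_rect(x, target):
            return x
    return None

def learn_rectangle(target, domain, d):
    empty = ((1,) * d, (0,) * d)         # rectangle containing no point
    hypothesis = empty
    while True:
        x = equivalence_oracle(hypothesis, target, domain)
        if x is None:
            return hypothesis
        # The hypothesis never overshoots the target, so every
        # counterexample is positive; grow the bounding box to include it.
        if hypothesis == empty:
            hypothesis = (x, x)
        else:
            lo, hi = hypothesis
            hypothesis = (tuple(map(min, lo, x)), tuple(map(max, hi, x)))

d, n = 2, 5
domain = list(product(range(1, n + 1), repeat=d))
print(learn_rectangle(((2, 2), (4, 5)), domain, d))   # recovers ((2, 2), (4, 5))
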
We present algorithms for learning depth two neural networks where the
hidden nodes are threshold gates with constant fan-in. The transfer
function of the output node may be more general: we have results for
the cases where the threshold function, the logistic function, or the
identity function is ...
more >>>
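
A small forward-pass sketch of the architecture described above: hidden threshold gates of constant fan-in feeding a single output node whose transfer function may be a threshold, the logistic function, or the identity. The particular gate wirings, weights, and input are invented for illustration and carry no meaning beyond that.

import math

# Depth-two network: hidden nodes are threshold gates of constant fan-in,
# the output node applies a threshold, logistic, or identity transfer.

def threshold(z):
    return 1.0 if z >= 0 else 0.0

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def identity(z):
    return z

def depth_two_net(x, hidden_gates, output_weights, output_bias, transfer):
    # Each hidden gate is (indices, weights, bias); its fan-in is the
    # length of indices, bounded by a constant.  It fires iff the weighted
    # sum of its inputs reaches the bias.
    h = [threshold(sum(w * x[i] for i, w in zip(idx, ws)) - b)
         for idx, ws, b in hidden_gates]
    z = sum(v * hi for v, hi in zip(output_weights, h)) + output_bias
    return transfer(z)

# Two hidden gates of fan-in 2 over a 3-dimensional input.
gates = [((0, 1), (1.0, 1.0), 1.5),    # fires iff x0 + x1 >= 1.5
         ((1, 2), (1.0, -1.0), 0.0)]   # fires iff x1 - x2 >= 0
x = (1.0, 1.0, 0.0)
print(depth_two_net(x, gates, (0.7, 0.3), -0.5, logistic))
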
The majority of results in computational learning theory
are concerned with concept learning, i.e. with the special
case of function learning for classes of functions
with range {0,1}. Much less is known about the theory of
learning functions with a larger range such
as N or R. In ...
more >>>