ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)




Reports tagged with machine learning:
TR99-047 | 10th November 1999
Wolfgang Slany

Graph Ramsey games

We consider combinatorial avoidance and achievement games based on graph Ramsey theory: the players take turns coloring the still-uncolored edges of a graph G, each player being assigned a distinct color and choosing one edge per move. In avoidance games, completing a monochromatic subgraph isomorphic to ...
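
The sketch below illustrates the game dynamics described in this abstract. Since the abstract is truncated, the board (K_6) and the forbidden monochromatic subgraph (a triangle, as in the classic game of Sim) are assumptions chosen for illustration, and move selection is a simple stand-in rather than an actual strategy.

```python
"""Minimal sketch of an avoidance game of the kind described above. Assumptions
for illustration (the abstract is truncated): the board is the complete graph
K_6 and the forbidden monochromatic subgraph is a triangle, as in the classic
game of Sim; moves follow a fixed order rather than a real strategy."""
from itertools import combinations

def completes_triangle(colored, player, edge):
    """Would `player` coloring `edge` finish a triangle entirely in their color?"""
    u, v = edge
    own = {e for e, p in colored.items() if p == player}
    # Triangle u-v-w is monochromatic iff the two remaining sides are already ours.
    return any(frozenset((u, w)) in own and frozenset((v, w)) in own
               for w in vertices - {u, v})

n = 6  # since R(3,3) = 6, a 2-colored K_6 always contains a monochromatic triangle
vertices = set(range(n))
edges = [frozenset(e) for e in combinations(vertices, 2)]
colored = {}  # edge -> player who colored it

player = 0
for edge in edges:
    if completes_triangle(colored, player, edge):
        print(f"player {player} loses by completing a monochromatic triangle")
        break
    colored[edge] = player
    player = 1 - player
```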


TR07-088 | 7th September 2007
Elad Hazan, C. Seshadhri

Adaptive Algorithms for Online Decision Problems

Revisions: 1

We study the notion of learning in an oblivious changing environment. Existing online learning algorithms that minimize regret are shown to converge to the average of all locally optimal solutions. We propose a new performance metric, strengthening the standard metric of regret, to capture convergence to locally optimal solutions, and ...
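
For context on the standard regret notion this abstract strengthens, here is a sketch of a classic regret-minimizing algorithm (Hedge / multiplicative weights) over n experts. This is not the paper's adaptive algorithm; the loss matrix is synthetic and the learning rate is the textbook tuning.

```python
"""Sketch of a classic regret-minimizing algorithm (Hedge / multiplicative
weights) over n experts: the baseline notion the abstract strengthens. Not the
paper's adaptive algorithm; the loss matrix below is synthetic."""
import math
import random

def hedge(losses, eta):
    """losses: T x n matrix of per-round expert losses in [0, 1]."""
    n = len(losses[0])
    weights = [1.0] * n
    total = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        total += sum(p * l for p, l in zip(probs, round_losses))  # expected loss
        # Exponentially downweight experts in proportion to their loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    best = min(range(n), key=lambda i: sum(r[i] for r in losses))
    return total - sum(r[best] for r in losses)  # regret vs. best fixed expert

T, n = 1000, 10
random.seed(0)
losses = [[random.random() for _ in range(n)] for _ in range(T)]
eta = math.sqrt(math.log(n) / T)  # textbook tuning giving O(sqrt(T log n)) regret
print(f"regret: {hedge(losses, eta):.2f}")
```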


TR08-053 | 27th March 2008
Stephen A. Fenner, William Gasarch, Brian Postow

The complexity of learning SUBSEQ(A)

Higman showed that if A is *any* language then SUBSEQ(A) is regular, where SUBSEQ(A) is the language of all subsequences of strings in A. (The result we attribute to Higman is actually an easy consequence of his work.) Let s_1, s_2, s_3, ...
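
To make the SUBSEQ operator concrete, here is a small Python sketch that enumerates SUBSEQ(A) for a finite language A and tests subsequence membership. Higman's theorem covers arbitrary A, where SUBSEQ(A) is regular but an automaton for it cannot in general be computed from A; the finite case below is purely for intuition.

```python
"""Illustration of the SUBSEQ operator on a *finite* language. Higman's theorem
applies to arbitrary A, where SUBSEQ(A) is regular but an automaton for it
cannot in general be computed from A; the finite case here is for intuition."""
from itertools import combinations

def is_subsequence(x, y):
    """True iff x arises from y by deleting zero or more symbols."""
    it = iter(y)
    return all(c in it for c in x)  # `in` advances the iterator, enforcing order

def subseq(language):
    """Enumerate all subsequences of all strings in a finite language."""
    out = set()
    for y in language:
        for k in range(len(y) + 1):
            for idxs in combinations(range(len(y)), k):
                out.add("".join(y[i] for i in idxs))
    return out

print(sorted(subseq({"aba", "bb"})))
# ['', 'a', 'aa', 'ab', 'aba', 'b', 'ba', 'bb']
print(is_subsequence("ab", "aaabbb"))  # True
```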


TR21-132 | 11th September 2021
Eric Binnendyk

Pseudo-random functions and uniform learnability

Revisions: 1

Boolean circuits are a model of computation. A class of Boolean circuits is called a polynomial class if the number of nodes is bounded by a polynomial function of the number of input variables. A class $C_n[s(n)]$ of Boolean functions is called learnable if there are algorithms that can approximate ...
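
As a toy instance of the learnability notion being defined, here is a sketch of a brute-force consistent learner: given oracle access to an unknown function, it draws random labeled examples and returns any hypothesis from an explicit finite class that agrees with all of them. The hypothesis class and target below are hypothetical choices for illustration, not the circuit classes studied in the paper.

```python
"""Toy instance of the learnability notion being defined (illustrative only).
The hypothesis class H and target below are hypothetical choices, not the
circuit classes studied in the paper."""
import random
from itertools import product

def pac_learn(f_oracle, H, n, m, rng):
    """Draw m uniform random examples labeled by the oracle; return any
    hypothesis in H consistent with all of them (a 'consistent learner')."""
    sample = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(m)]
    labels = [f_oracle(x) for x in sample]
    for h in H:
        if all(h(x) == y for x, y in zip(sample, labels)):
            return h
    return None  # target lies outside the class

def agreement(f, h, n):
    """Exact fraction of all 2^n inputs on which f and h agree (small n only)."""
    pts = list(product((0, 1), repeat=n))
    return sum(f(x) == h(x) for x in pts) / len(pts)

n = 4
target = lambda x: x[0] & x[2]        # the unknown function, seen only via queries
H = [lambda x, i=i, j=j: x[i] & x[j]  # candidate class: ANDs of two input bits
     for i in range(n) for j in range(n)]
h = pac_learn(target, H, n, m=30, rng=random.Random(1))
print(f"agreement with target: {agreement(target, h, n):.2f}")
```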


TR24-098 | 26th May 2024
Noga Amit, Orr Paradise, Guy Rothblum, Shafi Goldwasser

Models That Prove Their Own Correctness

Revisions: 2

How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured on average over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically founded solution to this problem: to train Self-Proving models that prove ...
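
One way to picture a model that proves its own correctness: on each input it emits an answer together with a certificate, and a small trusted verifier accepts only if the certificate establishes the answer. The sketch below uses the gcd with Bezout coefficients as an illustrative certificate; this choice is an assumption for exposition, not the paper's protocol.

```python
"""Sketch: a 'model' that outputs an answer plus a certificate, and a small
trusted verifier that accepts only correct answers. The gcd/Bezout certificate
is an illustrative choice, not the paper's protocol."""

def model(a, b):
    """Stand-in learned model: extended Euclid returning (gcd, certificate)."""
    old_r, r, old_x, x, old_y, y = a, b, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, (old_x, old_y)  # d and coefficients x, y with x*a + y*b == d

def verify(a, b, d, cert):
    """Sound check: d divides both inputs and d = x*a + y*b. Any common divisor
    of a and b divides x*a + y*b = d, so an accepted d must be the gcd."""
    x, y = cert
    return d > 0 and a % d == 0 and b % d == 0 and x * a + y * b == d

a, b = 252, 198
d, cert = model(a, b)
print(d, verify(a, b, d, cert))  # 18 True
```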




ISSN 1433-8092