Marek Karpinski, Angus Macintyre

We introduce a new method for proving explicit upper bounds on the VC Dimension of general functional basis networks, and prove as an application, for the first time, the VC Dimension of analog neural networks with the sigmoid activation function $\sigma(y)=1/(1+e^{-y})$ to ...
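For reference, the sigmoid activation named in this abstract is the standard logistic function; a minimal sketch in Python (function name is ours, not the paper's):

```python
import math

def sigmoid(y: float) -> float:
    """Logistic sigmoid: sigma(y) = 1 / (1 + e^{-y})."""
    return 1.0 / (1.0 + math.exp(-y))

# A smooth 0/1 threshold: sigma(0) = 0.5, saturating
# towards 0 and 1 as |y| grows.
print(sigmoid(0.0))   # 0.5
```

Note the parentheses in $1/(1+e^{-y})$: without them the expression would read as $(1/1)+e^{-y}$.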

Pascal Koiran

The main result of this paper is an $\Omega(n^{1/4})$ lower bound on the size of a sigmoidal circuit computing a specific $AC^0_2$ function. This is the first lower bound for the computation model of sigmoidal circuits with unbounded weights. We also give upper and lower bounds for the ...

Noga Alon, Shay Moran, Amir Yehudayoff

We study the maximum possible sign rank of $N \times N$ sign matrices with a given VC dimension $d$. For $d=1$, this maximum is $3$. For $d=2$, this maximum is $\tilde{\Theta}(N^{1/2})$. Similar (slightly less accurate) statements hold for $d>2$ as well. We discuss the tightness of our methods, and describe ...
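The VC dimension that recurs throughout these abstracts can, for a small finite class, be computed by brute force over candidate shattered sets; an illustrative (exponential-time) sketch, with all names ours:

```python
from itertools import combinations

def vc_dimension(domain, concepts):
    """Brute-force VC dimension of a finite concept class.
    `concepts` is a collection of subsets (frozensets) of `domain`."""
    best = 0
    for d in range(1, len(domain) + 1):
        for S in combinations(domain, d):
            # S is shattered iff every labelling of S is realized by some concept.
            patterns = {tuple(x in c for x in S) for c in concepts}
            if len(patterns) == 2 ** d:
                best = d
                break
        else:
            return best  # no set of size d is shattered
    return best

# Thresholds {x : x >= t} on {1,2,3} shatter singletons but no pair.
domain = [1, 2, 3]
thresholds = [frozenset(x for x in domain if x >= t) for t in range(1, 5)]
print(vc_dimension(domain, thresholds))  # 1
```

The paper's regime is the opposite one (huge $N \times N$ matrices), where such enumeration is hopeless; the sketch is only meant to pin down the definition.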

Shay Moran, Amir Shpilka, Avi Wigderson, Amir Yehudayoff

In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching. We present relatively efficient constructions of {\em sample compression schemes} and {\em teaching sets} for classes of low VC-dimension. Let $C$ be a finite boolean concept class of VC-dimension $d$. Set $k ...

Shay Moran, Amir Yehudayoff

We prove that proper PAC learnability implies compression. Namely, if a concept class $C \subseteq \Sigma^X$ is properly PAC learnable with $d$ samples, then $C$ has a sample compression scheme of size $2^{O(d)}$.

In particular, every boolean concept class with constant VC dimension has a sample compression scheme of constant size. ...
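As a toy illustration of what a sample compression scheme is (this is not the paper's construction), one-dimensional threshold concepts $\{x : x \ge t\}$ admit a scheme of size 1: keep only the smallest positive example. A sketch, with all names ours:

```python
def compress(sample):
    """Compress a labelled sample consistent with some threshold x >= t:
    keep only the smallest positively-labelled point (or nothing)."""
    positives = [x for x, label in sample if label]
    return [min(positives)] if positives else []

def reconstruct(kept):
    """Reconstruct a hypothesis from the compressed sample."""
    if not kept:
        return lambda x: False
    t = kept[0]
    return lambda x: x >= t

sample = [(2, False), (5, True), (7, True), (3, False)]
h = reconstruct(compress(sample))
# The reconstructed hypothesis relabels the whole sample correctly.
print(all(h(x) == label for x, label in sample))  # True
```

The defining property, visible here, is that the hypothesis rebuilt from the small kept subset must be correct on every point of the original sample; thresholds have VC dimension 1 and, matching the theorem's spirit, a constant-size scheme.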

Shay Moran, Cyrus Rashtchian

We study complexity measures on subsets of the boolean hypercube and exhibit connections between algebra (the Hilbert function) and combinatorics (VC theory). These connections yield results in both directions. Our main complexity-theoretic result proves that most linear program feasibility problems cannot be computed by polynomial-sized constant-depth circuits. Moreover, our result ...

Stasys Jukna

We consider probabilistic circuits working over the real numbers, and using arbitrary semialgebraic functions of bounded description complexity as gates. We show that such circuits can be simulated by deterministic circuits with an only polynomial blowup in size. An algorithmic consequence is that randomization cannot substantially speed up dynamic programming. ...

Anup Bhattacharya, Sourav Chakraborty, Arijit Ghosh, Gopinath Mishra, Manaswi Paraashar

The disjointness problem - where Alice and Bob are given two subsets of $\{1, \dots, n\}$ and they have to check if their sets intersect - is a central problem in the world of communication complexity. While both deterministic and randomized communication complexities for this problem are known to be ...
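The trivial deterministic upper bound for disjointness is $n$ bits: Alice sends her entire characteristic vector and Bob decides locally. A sketch of that naive protocol (function names ours):

```python
def alice_message(A, n):
    """Alice encodes her subset of {1,...,n} as an n-bit characteristic vector."""
    return [1 if i in A else 0 for i in range(1, n + 1)]

def bob_decides(message, B):
    """Bob checks whether any of his elements appears in Alice's message.
    Returns True iff the sets are disjoint."""
    return all(message[b - 1] == 0 for b in B)

n = 6
A, B = {1, 4}, {2, 3, 5}
print(bob_decides(alice_message(A, n), B))  # True: A and B are disjoint
```

The interest of the area, as the abstract indicates, lies in matching lower bounds: deterministic protocols provably cannot do much better than this $n$-bit baseline.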

Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, Amir Yehudayoff

A seminal result in learning theory characterizes the PAC learnability of binary classes through the Vapnik-Chervonenkis dimension. Extending this characterization to the general multiclass setting has been open since the pioneering works on multiclass PAC learning in the late 1980s. This work resolves this problem: we characterize multiclass PAC learnability ...