ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)




Reports tagged with Neural Networks:
TR94-024 | 12th December 1994
Marek Karpinski, Angus Macintyre

Polynomial Bounds for VC Dimension of Sigmoidal Neural Networks

We introduce a new method for proving explicit upper bounds on the VC
Dimension of general functional basis networks, and prove as an
application, for the first time, the VC Dimension of analog neural
networks with the sigmoid activation function $\sigma(y)=1/(1+e^{-y})$
to ... more >>>
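For illustration only, a minimal sketch (plain Python, not taken from the report) of a single sigmoidal gate applying this activation function to an affine combination of its inputs; all weights and inputs below are placeholder values.

    import math

    def sigmoid(y: float) -> float:
        """Sigmoid activation sigma(y) = 1 / (1 + e^(-y))."""
        return 1.0 / (1.0 + math.exp(-y))

    def sigmoidal_gate(weights, bias, inputs):
        """One analog gate: sigmoid applied to a weighted sum of the inputs plus a bias."""
        return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

    # Placeholder weights and inputs; the output lies strictly between 0 and 1.
    print(sigmoidal_gate([0.5, -1.2], 0.1, [1.0, 0.3]))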


TR94-012 | 12th December 1994

Bounds for the Computational Power and Learning Complexity of Analog Neural Nets

It is shown that high-order feedforward neural nets of constant depth with piecewise
polynomial activation functions and arbitrary real weights can be simulated for Boolean
inputs and outputs by neural nets of a somewhat larger size and depth with Heaviside
gates and weights ... more >>>
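For reference, a minimal sketch (plain Python, not from the report) of a Heaviside threshold gate on Boolean inputs, the gate type used in the simulating networks; the weights and threshold are placeholder values.

    def heaviside_gate(weights, threshold, inputs):
        """Threshold (Heaviside) gate: outputs 1 iff the weighted sum reaches the threshold."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    # With these placeholder parameters the gate computes AND of two Boolean inputs.
    assert heaviside_gate([1, 1], 2, [1, 1]) == 1
    assert heaviside_gate([1, 1], 2, [1, 0]) == 0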


TR94-017 | 12th December 1994

Neural Nets with Superlinear VC-Dimension

It has been known for quite a while that the Vapnik-Chervonenkis dimension
(VC-dimension) of a feedforward neural net with linear threshold gates is at
most O(w log w), where w is the total number of weights in the neural net.
We show in this paper that this bound is in ... more >>>


TR95-055 | 24th November 1995
Marek Karpinski, Angus Macintyre

VC Dimension of Sigmoidal and General Pfaffian Networks

We introduce a new method for proving explicit upper bounds
on the VC Dimension of general functional basis networks,
and prove as an application, for the first time, that the
VC Dimension of analog neural networks with the sigmoidal
activation function $\sigma(y)=1/(1+e^{-y})$ ... more >>>


TR96-025 | 22nd March 1996
Berthold Ruf

The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials

Recently, researchers have begun to investigate the
computational power of spiking neurons (also called "integrate-and-fire
neurons"). These are neuron models that are substantially
more realistic from a biological point of view than those
traditionally employed in artificial neural nets.
It has turned out that the ... more >>>
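A heavily simplified sketch (plain Python, not the model analyzed in the report) of the integrate-and-fire idea: each incoming spike contributes a postsynaptic potential of some fixed shape, and the neuron fires once the weighted sum of these potentials crosses a threshold. The triangular PSP shape and all constants below are illustrative assumptions.

    def psp(t: float) -> float:
        """Illustrative postsynaptic potential: a triangular pulse of duration 2."""
        if 0.0 <= t <= 1.0:
            return t          # rising phase
        if 1.0 < t <= 2.0:
            return 2.0 - t    # falling phase
        return 0.0

    def first_firing_time(spike_times, weights, threshold, dt=0.01, horizon=5.0):
        """Return the first time the summed, weighted PSPs reach the threshold, or None."""
        t = 0.0
        while t <= horizon:
            potential = sum(w * psp(t - s) for w, s in zip(weights, spike_times))
            if potential >= threshold:
                return t
            t += dt
        return None

    # Two input spikes with placeholder weights; the neuron fires once enough PSP mass overlaps.
    print(first_firing_time([0.0, 0.5], [1.0, 1.0], threshold=1.2))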


TR97-049 | 22nd October 1997
Michael Schmitt

On the Complexity of Learning for Spiking Neurons with Temporal Coding

Spiking neurons are models for the computational units in
biological neural systems where information is considered to be encoded
mainly in the temporal pattern of their activity. In a network of
spiking neurons a new set of parameters becomes relevant which has no
counterpart in traditional ... more >>>


TR97-051 | 11th November 1997
Pekka Orponen

On the Effect of Analog Noise in Discrete-Time Analog Computations

We introduce a model for analog computation with discrete
time in the presence of analog noise
that is flexible enough to cover the most important concrete
cases, such as noisy analog neural nets and networks of spiking neurons.
This model subsumes the classical ... more >>>


TR97-052 | 11th November 1997
Eduardo D. Sontag

Analog Neural Nets with Gaussian or other Common Noise Distributions cannot Recognize Arbitrary Regular Languages

We consider recurrent analog neural nets where the output of each
gate is subject to Gaussian noise, or any other common noise
distribution that is nonzero on a large set.
We show that many regular languages cannot be recognized by
networks of this type, and
more >>>


TR99-005 | 21st December 1998
Michael Schmitt

On the Sample Complexity for Nonoverlapping Neural Networks

A neural network is said to be nonoverlapping if there is at most one
edge outgoing from each node. We investigate the number of examples
that a learning algorithm needs when using nonoverlapping neural
networks as hypotheses. We derive bounds for this sample complexity
in terms of the Vapnik-Chervonenkis dimension. ... more >>>
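To make the definition concrete, a minimal sketch (plain Python; the edge-list representation is an assumption, not the report's formalism) that checks the nonoverlapping property: no node may have more than one outgoing edge.

    from collections import Counter

    def is_nonoverlapping(edges):
        """True iff every node has at most one outgoing edge.

        `edges` is an iterable of (source, target) pairs."""
        out_degree = Counter(src for src, _ in edges)
        return all(d <= 1 for d in out_degree.values())

    # A tree-shaped (nonoverlapping) network versus one where x1 feeds two gates.
    print(is_nonoverlapping([("x1", "g1"), ("x2", "g1"), ("g1", "out")]))  # True
    print(is_nonoverlapping([("x1", "g1"), ("x1", "g2"), ("g1", "out")]))  # False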


TR00-030 | 31st May 2000

A Simple Model for Neural Computation with Firing Rates and Firing Correlations

A simple extension of standard neural network models is introduced that
provides a model for neural computations that involve both firing rates and
firing correlations. Such an extension appears to be useful since it has been
shown that firing correlations play a significant computational role in
many biological neural systems. Standard ... more >>>


TR00-086 | 26th September 2000
Michael Schmitt

On the Complexity of Computing and Learning with Multiplicative Neural Networks

In a great variety of neuron models neural inputs are
combined using the summing operation. We introduce the concept of
multiplicative neural networks which contain units that multiply
their inputs instead of summing them and, thus, allow inputs to
interact nonlinearly. The class of multiplicative networks
comprises such widely known ... more >>>
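To make the contrast concrete, a minimal sketch (plain Python, not from the report) of a standard summing unit next to a multiplicative product unit; the exponent-weighted product below is one common form of such a unit and is used here only as an illustration.

    import math

    def summing_unit(weights, inputs):
        """Standard unit: weighted sum of the inputs."""
        return sum(w * x for w, x in zip(weights, inputs))

    def product_unit(exponents, inputs):
        """Multiplicative unit: product of inputs raised to real exponents,
        so the inputs interact nonlinearly instead of being added."""
        return math.prod(x ** e for e, x in zip(exponents, inputs))

    # Placeholder parameters and inputs.
    x = [2.0, 3.0]
    print(summing_unit([1.0, 0.5], x))   # 2.0 + 1.5 = 3.5
    print(product_unit([2.0, 1.0], x))   # 2^2 * 3 = 12.0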


TR01-071 | 24th October 2001
Robert Albin Legenstein

Neural Circuits for Pattern Recognition with Small Total Wire Length

One of the most basic pattern recognition problems is whether a
certain local feature occurs in some linear array to the left of
some other local feature. We construct in this article circuits that
solve this problem with an asymptotically optimal number of
threshold gates. Furthermore, it is shown that ... more >>>
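For concreteness, a minimal sketch (plain Python, not the threshold-gate circuits constructed in the report) of the underlying decision problem: does some occurrence of one local feature lie strictly to the left of some occurrence of another in a linear array? The single-character features below are placeholder assumptions.

    def a_left_of_b(array, feature_a, feature_b):
        """True iff some occurrence of feature_a lies strictly left of some occurrence of feature_b."""
        seen_a = False
        for symbol in array:
            if symbol == feature_a:
                seen_a = True
            elif symbol == feature_b and seen_a:
                return True
        return False

    print(a_left_of_b("ccacb", "a", "b"))  # True: an 'a' precedes a 'b'
    print(a_left_of_b("cbca", "a", "b"))   # False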


TR17-098 | 28th May 2017
Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee

Understanding Deep Neural Networks with Rectified Linear Units

Revisions: 2

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give the first-ever polynomial time (in the size of data) algorithm to train to global optimality a ReLU DNN with one hidden layer, assuming the input dimension and number ... more >>>
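For orientation, a minimal sketch (Python with NumPy; the sizes and weights are placeholder assumptions, and this is not the training algorithm from the paper) of the function class in question: a one-hidden-layer ReLU DNN computing x -> w2 · max(0, W1 x + b1) + b2.

    import numpy as np

    def relu(z):
        """ReLU gate applied elementwise: x -> max(0, x)."""
        return np.maximum(0.0, z)

    def one_hidden_layer_relu(x, W1, b1, w2, b2):
        """A DNN with one hidden layer of ReLU units and a linear output."""
        return float(w2 @ relu(W1 @ x + b1) + b2)

    # Placeholder dimensions: 3 inputs, 4 hidden ReLU units, random weights.
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
    w2, b2 = rng.standard_normal(4), 0.0
    print(one_hidden_layer_relu(rng.standard_normal(3), W1, b1, w2, b2))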


TR17-190 | 6th November 2017
Anirbit Mukherjee, Amitabh Basu

Lower bounds over Boolean inputs for deep neural networks with ReLU gates.

Motivated by the resurgence of neural networks in solving complex learning tasks, we undertake a study of high-depth networks using ReLU gates, which implement the function $x \mapsto \max\{0,x\}$. We try to understand the role of depth in such neural networks by showing size lower bounds against ... more >>>


TR20-085 | 5th June 2020
Gal Vardi, Ohad Shamir

Neural Networks with Small Weights and Depth-Separation Barriers

In studying the expressiveness of neural networks, an important question is whether there are functions which can only be approximated by sufficiently deep networks, assuming their size is bounded. However, for constant depths, existing results are limited to depths $2$ and $3$, and achieving results for higher depths has been ... more >>>


TR22-159 | 18th November 2022
Songhua He, Periklis Papakonstantinou

Deep Neural Networks: The Missing Complexity Parameter

Deep neural networks are the dominant machine learning model. We show that this model is missing a crucial complexity parameter. Today, the standard neural network (NN) model is a circuit whose gates (neurons) are ReLU units. The complexity of an NN is quantified by the depth (number of layers) and ... more >>>




ISSN 1433-8092