Recently, researchers have begun to investigate the
computational power of spiking neurons (also called ``integrate-and-fire
neurons''). These neuron models are substantially
more realistic from a biological point of view than
those traditionally employed in artificial neural nets.
It has turned out that the computational power of networks of
spiking neurons is quite large.
In particular, they have the ability to communicate and manipulate
analog variables in spatio-temporal coding, i.e.~encoded in the
points in time at which specific neurons ``fire'' (and thereby send a ``spike''
to other neurons).
These results have motivated the question of which details of the
firing mechanism of spiking neurons are essential for their computational
power, and which are ``accidental'' aspects of their realization in
biological ``wetware''. This question obviously becomes important if one
wants to capture some of the advantages of computing and learning with
spatio-temporal coding in a new generation of artificial neural nets, such
as pulse stream VLSI.
The firing mechanism of spiking neurons is defined in terms of their
postsynaptic potentials or ``response functions'', which describe the
change in their electric membrane potential as a result of the firing of
another neuron. In this article we consider the case where the response
functions of spiking neurons are of the mathematically most
elementary type: step functions (i.e.~piecewise
constant functions). This happens to be the functional form that has so
far been adopted most frequently in pulse stream VLSI as the form of the
potential changes (``pulses'') that mimic the role of postsynaptic
potentials in biological neural systems. We prove the rather surprising
result that, in models without noise, the computational power of networks of
spiking neurons with arbitrary piecewise constant response functions is
strictly weaker than that of networks where the response functions of
neurons also contain short segments in which they increase or decrease
linearly (which is in fact biologically more
realistic). More precisely, we show, for example, that addition of analog
numbers is impossible for a network of spiking neurons with piecewise
constant response functions (with any bounded number of computation steps,
i.e. spikes), whereas addition of analog numbers is easy if the response
functions have linearly increasing segments.
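To give an informal illustration of the latter point (with notation chosen
purely for this sketch, and not taken from the formal definitions of our
model): suppose a neuron $v$ receives spikes from two presynaptic neurons at
times $t_1$ and $t_2$, that each associated response function rises linearly
with slope $1$ after a common delay $d$, and that $v$ fires as soon as the
sum of these responses reaches its threshold $\Theta$. While both responses
are in their rising segments, the membrane potential of $v$ is
\[
  P_v(t) \;=\; (t - t_1 - d) + (t - t_2 - d) ,
\]
and hence, assuming $\Theta$ is reached while both segments are still rising,
$v$ fires at time
\[
  t_v \;=\; \frac{t_1 + t_2}{2} + d + \frac{\Theta}{2} ,
\]
i.e.~the firing time of $v$ encodes, up to a scaling factor and a constant
shift, the sum $t_1 + t_2$ of the analog values encoded in the input firing
times. If instead the response functions are step functions, then $P_v(t)$ is
itself piecewise constant and changes its value only at the finitely many time
points determined by $t_1$, $t_2$ and the discontinuities of the response
functions; intuitively, this is what prevents the firing time of $v$ from
varying continuously with $t_1 + t_2$.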