We study networks of spiking neurons that use the timing of pulses
to encode information. Nonlinear interactions model the spatial
groupings of synapses on the dendrites and describe the computations
performed at local branches. We analyze how many training
examples these networks must receive during learning in order to
generalize well. Bounds on this sample complexity are derived in
terms of the pseudo-dimension. In particular, we obtain
almost linear and quadratic upper bounds in terms of the number of
adjustable parameters for depth-restricted and general feedforward
architectures, respectively. These bounds are also shown to be
asymptotically tight for networks that satisfy realistic
constraints.
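
As a rough formalization (the abstract itself introduces no symbols), one may write $W$ for the number of adjustable parameters and $\mathrm{Pdim}$ for the pseudo-dimension of the network class; reading "almost linear" in the conventional way as linear up to a logarithmic factor, the stated upper bounds would take the following form:

% Sketch only: W and Pdim are notation introduced here for
% illustration; the exact constants and log factors are as in the paper.
\[
  \mathrm{Pdim} = O(W \log W) \quad \text{(depth-restricted architectures)},
  \qquad
  \mathrm{Pdim} = O(W^{2}) \quad \text{(general feedforward architectures)}.
\]

By standard results relating the pseudo-dimension to sample complexity, bounds of this form translate into corresponding bounds on the number of training examples required for good generalization.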