ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #2 to TR17-098 | 28th February 2018 04:29

Understanding Deep Neural Networks with Rectified Linear Units





Revision #2
Authors: Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee
Accepted on: 28th February 2018 04:29
Downloads: 1372
Keywords: 


Abstract:

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to *global optimality*, with runtime polynomial in the data size albeit exponential in the input dimension.
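
Illustrative note (a hedged sketch of the exhaustive-search idea, not the paper's own pseudocode; the helper name exact_train_1hidden and the use of cvxpy are our choices for the sketch): for a one-hidden-layer net $f(x)=\sum_j a_j\,\max(0, w_j\cdot x + b_j)$, the output magnitudes can be absorbed into the hidden weights so that $a_j \in \{\pm 1\}$, and once each unit's set of "active" data points is fixed, minimizing a convex loss becomes a convex program with linear constraints. Enumerating only the activation patterns realizable by hyperplanes gives $O(D^n)$ candidates per unit, hence a runtime polynomial in the data size $D$ but exponential in the dimension $n$ and the width.

import itertools
import numpy as np
import cvxpy as cp   # assumed dependency; any convex-QP solver interface would do

def exact_train_1hidden(X, y, width):
    """Globally minimize sum_i (f(x_i) - y_i)^2 over one-hidden-layer ReLU nets
    f(x) = sum_j a_j * relu(w_j . x + b_j), with a_j in {+1, -1} (magnitudes absorbed).

    Toy exhaustive search: enumerates ALL active sets per hidden unit (2^D each), so it
    is only feasible for a handful of points; the polynomial-in-D variant would enumerate
    just the O(D^n) splits of the data that are realizable by hyperplanes.
    """
    D, n = X.shape
    best_val, best_params = np.inf, None
    subsets = [frozenset(s) for r in range(D + 1)
               for s in itertools.combinations(range(D), r)]
    for active_sets in itertools.product(subsets, repeat=width):
        for signs in itertools.product([1.0, -1.0], repeat=width):
            W, b = cp.Variable((width, n)), cp.Variable(width)
            preds, constraints = 0, []
            for j in range(width):
                z = X @ W[j] + b[j]          # pre-activations of hidden unit j on the data
                mask = np.array([1.0 if i in active_sets[j] else 0.0 for i in range(D)])
                # With the pattern fixed, relu(z_i) equals z_i on the active set and 0 off it,
                # so the prediction is linear in (W, b) and the fit below is a convex QP.
                preds = preds + signs[j] * cp.multiply(mask, z)
                for i in range(D):           # linear constraints enforcing the chosen pattern
                    if i in active_sets[j]:
                        constraints.append(z[i] >= 0)
                    else:
                        constraints.append(z[i] <= 0)
            prob = cp.Problem(cp.Minimize(cp.sum_squares(preds - y)), constraints)
            prob.solve()
            if prob.status in ("optimal", "optimal_inaccurate") and prob.value < best_val:
                best_val, best_params = prob.value, (signs, W.value, b.value)
    return best_val, best_params

For instance, exact_train_1hidden(np.array([[0.0], [0.5], [1.0]]), np.array([0.0, 1.0, 0.0]), width=2) solves 8^2 * 2^2 = 256 small convex programs and returns an essentially zero loss, since two ReLUs can interpolate these three points exactly.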

Further, we improve on the known lower bounds on the size (from exponential to super-exponential) required to approximate a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parameterized families of ``hard'' functions, in contrast to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac{1}{2}k^{k+1}-1$ total nodes.
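
For intuition about why depth buys affine pieces (a classical sawtooth example in the spirit of, but not identical to, the smoothly parameterized hard family behind the gap theorem above): the tent map $t(x) = 2\max(0,x) - 4\max(0, x - 1/2)$ uses two ReLU units and has two affine pieces on $[0,1]$, while its $k$-fold composition uses $2k$ units spread over $k$ layers and has $2^k$ pieces; any one-hidden-layer ReLU net $\mathbb{R}\to\mathbb{R}$ with $w$ units has at most $w+1$ pieces, so a shallow net matching the composition needs exponentially many units. A small numerical check:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Two ReLU units: 2x on [0, 1/2] and 2 - 2x on [1/2, 1]; maps [0,1] onto [0,1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def iterate(f, k):
    # k-fold composition f(f(...f(x)...)), i.e. a depth-k ReLU net with 2 units per layer.
    def g(x):
        for _ in range(k):
            x = f(x)
        return x
    return g

def count_affine_pieces(f, lo=0.0, hi=1.0, samples=200001):
    # Count pieces of a piecewise-linear function by counting slope changes between
    # consecutive sample intervals (the grid is chosen so breakpoints at j/2^k land on it).
    xs = np.linspace(lo, hi, samples)
    slopes = np.diff(f(xs)) / np.diff(xs)
    return int(np.sum(~np.isclose(slopes[1:], slopes[:-1]))) + 1

k = 6
print(count_affine_pieces(iterate(tent, k)))   # 2**k = 64 pieces from only 2*k = 12 ReLUs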

Finally, for the family of $\mathbb{R}^n\to \mathbb{R}$ DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture; most distinctively, our lower bound is demonstrated by an explicit construction of a *smoothly parameterized* family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral geometry.
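
The role of zonotopes can be sanity-checked numerically (an illustrative sketch of the underlying fact, not necessarily the paper's exact construction): for generators $b_1,\dots,b_m \in \mathbb{R}^n$, the support function of the zonotope $\sum_i [-b_i, b_i]$ is $g(x) = \sum_i |\langle b_i, x\rangle|$, which one hidden layer of $2m$ ReLUs computes exactly because $|t| = \max(0,t) + \max(0,-t)$. Its affine pieces are the chambers of the central arrangement $\{x : \langle b_i, x\rangle = 0\}$, of which there are $2\sum_{j=0}^{n-1}\binom{m-1}{j}$ for generic generators ($2m$ in the plane); composing such a function with a deep one-dimensional sawtooth then multiplies the piece count, which is roughly the mechanism behind the lower bound.

import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 7
B = rng.standard_normal((m, n))          # generic generators b_1, ..., b_m in R^2

def support_fn(x):
    # One-hidden-layer ReLU form: sum_i relu(b_i.x) + relu(-b_i.x) = sum_i |b_i.x|.
    z = B @ x
    return np.sum(np.maximum(z, 0.0) + np.maximum(-z, 0.0))

# Count the affine pieces of the support function by counting distinct sign patterns
# of (b_i.x) as x sweeps the unit circle; each pattern is one linear region (chamber).
thetas = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
assert np.isclose(support_fn(dirs[0]), np.abs(B @ dirs[0]).sum())
patterns = {tuple(np.sign(B @ d).astype(int)) for d in dirs}
print(len(patterns), 2 * m)              # both equal 14 for generic generators in the plane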



Changes to previous version:

This is the final version that was published at ICLR 2018. The poly(data) exact training algorithm for any single-hidden-layer R^n -> R ReLU DNN now has cleaner pseudocode, given on page 8. Page 7 now gives a more precise description of when and how the zonotope construction improves on Theorem 4 of https://arxiv.org/abs/1402.1869.


Revision #1 to TR17-098 | 17th June 2017 22:25

Understanding Deep Neural Networks with Rectified Linear Units





Revision #1
Authors: Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee
Accepted on: 17th June 2017 22:25
Downloads: 2814
Keywords: 


Abstract:

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give the first polynomial-time (in the size of the data) algorithm to train a ReLU DNN with one hidden layer to global optimality, assuming the input dimension and the number of nodes of the network are fixed constants.

We also improve on the known lower bounds on the size (from exponential to super-exponential) required to approximate a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parameterized families of ``hard'' functions, in contrast to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac{1}{2}k^{k+1}-1$ total nodes.

Finally, we construct a family of $\mathbb{R}^n\to \mathbb{R}$ piecewise linear functions for $n\geq 2$ (also smoothly parameterized), whose number of affine pieces scales exponentially with the dimension $n$ at any fixed size and depth. To the best of our knowledge, such a construction with exponential dependence on $n$ has not been achieved by previous families of ``hard'' functions in the neural nets literature. This construction utilizes the theory of zonotopes from polyhedral theory.


Paper:

TR17-098 | 28th May 2017 21:26

Understanding Deep Neural Networks with Rectified Linear Units


Abstract:

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give the first polynomial-time (in the size of the data) algorithm to train a ReLU DNN with one hidden layer to global optimality, assuming the input dimension and the number of nodes of the network are fixed constants.

We also improve on the known lower bounds on the size (from exponential to super-exponential) required to approximate a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parameterized families of ``hard'' functions, in contrast to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac{1}{2}k^{k+1}-1$ total nodes.

Finally, we construct a family of $\mathbb{R}^n\to \mathbb{R}$ piecewise linear functions for $n\geq 2$ (also smoothly parameterized), whose number of affine pieces scales exponentially with the dimension $n$ at any fixed size and depth. To the best of our knowledge, such a construction with exponential dependence on $n$ has not been achieved by previous families of ``hard'' functions in the neural nets literature. This construction utilizes the theory of zonotopes from polyhedral theory.



ISSN 1433-8092