A long-standing open problem in the theory of neural networks is the development of quantitative methods to estimate and compare the capabilities of different architectures. Here we define the capacity of an architecture by the binary logarithm of the number of functions it can compute as the synaptic weights are varied. The capacity provides an upper bound on the number of bits that can be extracted from the training data and stored in the architecture during learning. We study the capacity of layered, fully connected architectures of linear threshold neurons with $L$ layers of size $n_1, n_2, \ldots, n_L$ and show that in essence the capacity is given by a cubic polynomial in the layer sizes: $C(n_1, \ldots, n_L) = \sum_{k=1}^{L-1} \min(n_1, \ldots, n_k)\, n_k n_{k+1}$, where layers that are ...
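The cubic capacity formula above can be evaluated directly. A minimal Python sketch, assuming only the formula as stated (the function name capacity and the example layer sizes are illustrative, not from the abstract):

def capacity(layer_sizes):
    # C(n_1, ..., n_L) = sum_{k=1}^{L-1} min(n_1, ..., n_k) * n_k * n_{k+1}
    total = 0
    for k in range(len(layer_sizes) - 1):
        prefix_min = min(layer_sizes[:k + 1])  # min(n_1, ..., n_k) over the leading layers
        total += prefix_min * layer_sizes[k] * layer_sizes[k + 1]
    return total

# Example: L = 3 layers of sizes 100, 50, 1
# k = 1 term: min(100) * 100 * 50 = 500000
# k = 2 term: min(100, 50) * 50 * 1 = 2500
print(capacity([100, 50, 1]))  # 502500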
We deal with computational issues of loading a fixed-architecture neural network with a set of posit...
Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwie...
How does the size of a neural circuit influence its learning performance? Larger brains tend to be f...
The capacity C_b of two-layer (N - 2L - 1) feed-forward neural networks is shown to satisfy the rela...
We study the excess capacity of deep networks in the context of supervised classification. That is, ...
We propose to measure the memory capacity of a state machine by the number of discernible states, w...
We propose an optimal architecture for deep neural networks of given size. The optimal architecture ...
It is shown that high-order feedforward neural nets of constant depth with piecewise-polyn...
This paper proposes a new neural network architecture by introducing an additional dimension called ...
We formalize a notion of loading information into connectionist networks that characterizes ...
The neural network is a powerful computing framework that has been exploited by biological evolution...
A general relationship is developed between the VC-dimension and the statistical lower epsilon-capac...
We consider the problem of learning in multilayer feed-forward networks of linear threshold units. W...
Determining the memory capacity of two-layer neural networks with $m$ hidden neurons and input dimen...