This paper presents a constructive approach to estimating the size of a neural network necessary to solve a given classification problem. The results are derived from an information-entropy perspective in the context of limited-precision integer weights; such weights are particularly suited to hardware implementations, since the area they occupy is limited and the computations performed with them can be implemented efficiently in hardware. On this basis, lower bounds are calculated on the number of bits needed to solve a given classification problem. These bounds are obtained by approximating the classification hypervolumes with the volumes of several regular (i.e., highly symmetri...
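For intuition, the flavor of such entropy-based bit bounds can be sketched with a standard counting argument (a generic illustration consistent with the abstract above, not the paper's exact derivation): a network that stores W integer weights with b bits each can encode at most 2^{Wb} distinct parameter settings, and hence realize at most that many distinct classifiers.

```latex
% Generic counting sketch (an illustration, not the paper's exact bound):
% a network storing W integer weights with b bits each has at most 2^{Wb}
% distinct configurations, hence realizes at most 2^{Wb} distinct classifiers.
% To guarantee that some configuration solves every problem in a family of
% N required dichotomies, the total number of bits must satisfy
\[
    W\,b \;\ge\; \log_2 N .
\]
% Example: realizing all 2^m dichotomies of m fixed points forces
% Wb >= log_2(2^m) = m, i.e., at least m bits in total.
```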
This dissertation considers the subject of information losses arising from finite datasets used in t...
The paper overviews results dealing with the approximation capabilities of neural networks, as well ...
We develop a new feedforward neural network representation of Lipschitz functions from [0, p]^n int...
In recent years, multilayer feedforward neural networks (NN) have been shown to be very effective to...
In this paper the authors prove two new lower bounds for the number-of-bits required by neural netwo...
This paper relies on the entropy of a data-set (i.e., number-of-bits) to prove tight bounds on the s...
This paper uses two different approaches to show that VLSI- and size-optimal discrete neural network...
This paper starts by overviewing results dealing with the approximation capabilities of neural netwo...
The paper overviews results dealing with the approximation capabilities of neural networks, and boun...
Because VLSI implementations do not cope well with highly interconnected nets the area of a chip gro...
The paper will show that in order to obtain minimum size neural networks (i.e., size-optimal) for im...
If neural networks are to be used on a large scale, they have to be implemented in hardware. However...
A general relationship is developed between the VC-dimension and the statistical lower epsilon-capac...
The constructive bounds on the needed number-of-bits (entropy) for solving a dichotomy (i.e., classi...
We formulate the entropy of a quantized artificial neural network as a differentiable function that ...
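The abstract above describes treating the entropy of a quantized network as a differentiable quantity. A minimal NumPy sketch of one standard way to do this (softly assigning each weight to the quantization levels via a softmax over distances, so that the entropy of level usage becomes a smooth function of the weights; the function names and the temperature parameter are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def soft_quantization_entropy(weights, levels, temperature=1.0):
    """Differentiable (soft) entropy of a weight tensor under quantization.

    Each weight is softly assigned to the quantization levels via a softmax
    over negative squared distances; the entropy of the mean assignment
    distribution is then a smooth function of the underlying weights.
    """
    w = weights.reshape(-1, 1)                        # (N, 1) weights
    q = levels.reshape(1, -1)                         # (1, L) levels
    logits = -((w - q) ** 2) / temperature            # soft distance scores
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)                 # per-weight assignment probs
    p_mean = p.mean(axis=0)                           # level-usage distribution
    return -np.sum(p_mean * np.log2(p_mean + 1e-12))  # entropy in bits

# Example: soft entropy of a random weight vector on a 2-bit (4-level) grid
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
levels = np.linspace(-1.5, 1.5, 4)
print(f"soft entropy: {soft_quantization_entropy(w, levels):.3f} bits")
```

Because the softmax assignment is smooth, this entropy can be used as a penalty term during gradient-based training; as the temperature decreases, the soft assignment approaches hard quantization.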