If neural networks are to be used on a large scale, they have to be implemented in hardware. However, the cost of a hardware implementation is critically sensitive to factors such as the precision used for the weights, the total number of bits of information, and the maximum fan-in used in the network. This paper presents a version of the Constraint Based Decomposition training algorithm that is able to produce networks using limited-precision integer weights and units with limited fan-in. The algorithm is tested on the 2-spiral problem and the results are compared with those of other existing algorithms.
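As a rough illustration of the two hardware constraints the abstract refers to, the sketch below quantizes a real-valued weight matrix to signed n-bit integers and trims each unit's fan-in to a fixed maximum. The function names, the uniform quantization scheme, and the magnitude-based pruning are assumptions made for this example only; they are not the mechanism used by Constraint Based Decomposition, which constructs the constrained network during training.

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Uniformly quantize a weight matrix to signed n-bit integers.

    Returns the integer weights and the scale factor that maps them
    back to the original range (hypothetical scheme, not the one used
    by Constraint Based Decomposition).
    """
    q_max = 2 ** (n_bits - 1) - 1                     # e.g. 7 for 4-bit signed
    max_abs = np.max(np.abs(w))
    scale = max_abs / q_max if max_abs > 0 else 1.0
    w_int = np.clip(np.round(w / scale), -q_max, q_max).astype(int)
    return w_int, scale

def limit_fan_in(w, max_fan_in=8):
    """Zero out all but the max_fan_in largest-magnitude inputs per unit.

    Each row of w holds the incoming weights of one unit, so the number
    of non-zero entries in a row is that unit's fan-in.
    """
    w_limited = np.zeros_like(w)
    for i, row in enumerate(w):
        keep = np.argsort(np.abs(row))[-max_fan_in:]
        w_limited[i, keep] = row[keep]
    return w_limited

# Example: a 4-unit hidden layer fed by 16 inputs
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16))
w_int, scale = quantize_weights(limit_fan_in(w, max_fan_in=8), n_bits=4)
print(w_int)
print("scale:", scale)
```

The point of the sketch is simply that both constraints reduce the information a chip has to store and route: fewer bits per weight and fewer non-zero connections per unit.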
A critical question in the neural network research today concerns how many hidden neurons to use. Th...
Competitive majority network trained by error correction (C-Mantec), a recently proposed c...
Combinatorial optimization problems compose an important class of mathematical problems that includ...
Because VLSI implementations do not cope well with highly interconnected nets the area of a chip gro...
In this work, a new approach for training artificial neural networks is presented which utilises tec...
This paper presents a constructive approach to estimating the size of a neural network necessary to ...
The application of programmable devices to implement neural networks requires efficient training alg...
In neural networks, simultaneous determination of the optimum structure and weights is a challenge. ...
This paper uses two different approaches to show that VLSI- and size-optimal discrete neural network...
Backpropagation (BP)-based gradient descent is the general approach to train a neural network with a...
A new family of neural network architectures is presented. This family of architectures solves the p...
Many constructive learning algorithms have been proposed to find an appropriate network structure fo...
In recent years, multilayer feedforward neural networks (NN) have been shown to be very effective to...
We propose BlockProp, a neural network training algorithm. Unlike backpropagation, it does not rely ...
This paper proposes a new approach to address the optimal design of a Feed-forward Neural Network (F...