Starting with two hidden units, we train a simple single-hidden-layer feed-forward neural network to solve the n-bit parity problem. If the network fails to classify all input patterns correctly, an additional hidden unit is added to the hidden layer and the network is retrained. This process is repeated until a network that correctly classifies all the input patterns has been constructed. Using a variant of the quasi-Newton methods for training, we have found single-hidden-layer networks with fewer than n hidden units that solve the n-bit parity problem for some values of n. This demonstrates the power of combining a quasi-Newton method with a node-incremental approach.
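The constructive loop described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: it grows a single-hidden-layer network for n-bit parity, retraining with a quasi-Newton optimizer (SciPy's BFGS stands in for the paper's unspecified quasi-Newton variant) each time a hidden unit is added; the architecture details (tanh hidden units, sigmoid output, squared-error loss, random restarts) are assumptions.

```python
# Sketch of the node-incremental scheme: start with 2 hidden units,
# retrain with a quasi-Newton method (BFGS here), and add a unit
# whenever some input pattern is still misclassified.
import itertools
import numpy as np
from scipy.optimize import minimize

def parity_data(n):
    # All 2^n binary patterns and their parity (sum of bits mod 2).
    X = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    y = X.sum(axis=1) % 2
    return X, y

def unpack(w, n, h):
    # Flat parameter vector -> input->hidden weights, hidden biases,
    # hidden->output weights, output bias.
    i = 0
    W1 = w[i:i + n * h].reshape(n, h); i += n * h
    b1 = w[i:i + h]; i += h
    W2 = w[i:i + h]; i += h
    return W1, b1, W2, w[i]

def forward(w, X, h):
    W1, b1, W2, b2 = unpack(w, X.shape[1], h)
    hid = np.tanh(X @ W1 + b1)                      # hidden activations
    return 1.0 / (1.0 + np.exp(-(hid @ W2 + b2)))   # sigmoid output

def loss(w, X, y, h):
    return np.mean((forward(w, X, h) - y) ** 2)

def train_parity(n, max_hidden=8, restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    X, y = parity_data(n)
    for h in range(2, max_hidden + 1):   # start with two hidden units
        for _ in range(restarts):
            w0 = rng.normal(scale=1.0, size=n * h + 2 * h + 1)
            res = minimize(loss, w0, args=(X, y, h), method="BFGS")
            if np.all((forward(res.x, X, h) > 0.5) == (y > 0.5)):
                return h                 # every pattern classified correctly
    return None                          # gave up at max_hidden units
```

For small n (e.g. `train_parity(2)` or `train_parity(3)`) this typically succeeds with few hidden units; how far below n the unit count can go for larger n is exactly the question the abstract addresses.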
In this work, we study how the selection of examples affects the learning procedure in a neural netw...
A fast parsimonious linear-programming-based algorithm for training neural networks is proposed that...
Abstract: We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold...
Proceedings of the International Joint Conference on Neural Networks
Interest in algorithms which dynamically construct neural networks has been growing in recent years....
An algorithm for the training of multilayered feedforward neural networks is presented. The strategy...
An algorithm for the training of a special multilayered feed-forward neural network is presented. Th...
Abstract:- Highly nonlinear data sets are important in the field of artificial neural networks. It i...
N-bit parity neural networks: new solutions based on linear programming
This paper addresses saturation phenomena at hidden nodes during the learning phase of n...
We present a novel training algorithm for a feed forward neural network with a single hidden layer o...
In this paper, ordered neural networks for the N-bit parity function containing ⌈log2(N + 1)⌉ threshold...
Abstract. A universal binary neuron (UBN) operates with the complex-valued weights and the complex-v...
Abstract—We develop, in this brief, a new constructive learning algorithm for feedforward neural net...
A universal binary neuron (UBN) operates with complex-valued weights and a complex-valued activation...