Abstract- Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically nonincreasing function of the number of hidden units. New training algorithms are developed in which the weights and thresholds from a well-trained smaller network are used to initialize a larger network. Methods are also developed to reduce the total amount of training required. It is shown that this technique yields an error curve that is a monotonically nonincreasing function of the number of hidden units and significantly reduces the training complexity. Additional results are presented based on using different probability distributions to generate the initial weights.
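The growth step described in the abstract can be illustrated concretely. Below is a minimal NumPy sketch, assuming a single hidden layer; the function name grow_hidden_layer, the small-Gaussian initialization of the new input weights, and the zero initialization of the new output weights are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def grow_hidden_layer(W1, b1, W2, b2, extra_units, rng=None, scale=0.01):
    """Initialize a larger MLP from a well-trained smaller one (sketch).

    W1: (H, d_in)   input-to-hidden weights of the trained smaller network
    b1: (H,)        hidden-unit thresholds (biases)
    W2: (d_out, H)  hidden-to-output weights
    b2: (d_out,)    output thresholds

    The trained weights and thresholds are copied verbatim; the extra_units
    new hidden units receive small random input weights (assumed Gaussian
    here) and zero output weights, so the grown network initially reproduces
    the smaller network's outputs.
    """
    rng = np.random.default_rng() if rng is None else rng
    H, d_in = W1.shape
    d_out = W2.shape[0]

    # Copy the trained input weights and thresholds, then append new units.
    W1_new = np.vstack([W1, scale * rng.standard_normal((extra_units, d_in))])
    b1_new = np.concatenate([b1, scale * rng.standard_normal(extra_units)])

    # Zero output weights for the new units keep the initial network output
    # identical to that of the smaller, already-trained network.
    W2_new = np.hstack([W2, np.zeros((d_out, extra_units))])
    return W1_new, b1_new, W2_new, b2.copy()
```

Because the copied weights reproduce the smaller network's mapping at the start of training, the larger network's warm-started error can only match or improve on the error already achieved with fewer hidden units, which is what makes the error-versus-hidden-units curve nonincreasing. The distribution used to generate the new units' initial weights (Gaussian in this sketch) is one of the choices the abstract says is varied in the additional results.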
Artificial neural networks have, in recent years, been very successfully applied in a wide range of ...
Several neural network architectures have been developed over the past several years. One of the mos...
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for opt...
Abstract- We propose a novel learning algorithm to train networks with multi-layer linear-threshold ...
Multilayer perceptrons (MLPs) [1] are the most common artificial neural networks employed in a large...
We present a training algorithm for multilayer perceptrons which relates to the technique of princip...
The use of multilayer perceptrons (MLP) with threshold functions (binary step function activations) ...
Multilayer perceptrons (MLPs) [1] are the most common artificial neural networks employed in a large...
A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to...
The standard multi-layer perceptron (MLP) training algorithm implicitly assumes that equal numbers o...
Training a multilayer perceptron by an error backpropagation algorithm is slow and uncertain. This p...
We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., de...
A general method for building and training multilayer perceptrons composed of linear threshold units...
This paper presents two compensation methods for multilayer perceptrons (MLPs) which are very diffic...
A general method for building and training multilayer perceptrons composed of linear threshold units...