The Nelder--Mead simplex algorithm (J. A. Nelder and R. Mead, Computer Journal, vol. 7, pages 308--313, 1965) for function minimization is explored as a method for fitting multi-layer perceptron models without derivatives. While in some examples it is not as successful as the most sophisticated methods from the numerical analysis literature, for a method that uses neither derivatives nor a line search it is surprisingly effective. Some instances where it might be the method of choice are indicated.

1 Introduction

Learning in a neural network consists in minimizing the discrepancies between the internal representation and the external assignation. To implement this in a neural network, we impose a penalty function, ae = ae(out...
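To illustrate the idea of the abstract above, here is a minimal sketch of fitting a tiny one-hidden-layer perceptron with a hand-rolled Nelder--Mead simplex (reflection, expansion, contraction, shrink). The network size, target function, coefficients, and iteration budget are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

def nelder_mead(f, x0, steps=2000, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal derivative-free Nelder--Mead minimizer (standard coefficients)."""
    n = len(x0)
    # Initial simplex: x0 plus n vertices, each perturbed along one axis.
    simplex = [np.array(x0, dtype=float)]
    for i in range(n):
        v = np.array(x0, dtype=float)
        v[i] += 0.5
        simplex.append(v)
    for _ in range(steps):
        simplex.sort(key=f)                               # best first, worst last
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)          # centroid excluding worst
        refl = centroid + alpha * (centroid - worst)      # reflection
        if f(refl) < f(best):
            exp = centroid + gamma * (refl - centroid)    # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = centroid + rho * (worst - centroid)   # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                         # shrink toward best vertex
                simplex = [best] + [best + sigma * (v - best) for v in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Toy fitting problem: one-hidden-layer MLP, 1 input, H tanh units, linear output.
H = 3
X = np.linspace(-1, 1, 10)
y = np.sin(np.pi * X)

def mlp(w, x):
    W1, b1 = w[:H], w[H:2 * H]
    W2, b2 = w[2 * H:3 * H], w[3 * H]
    h = np.tanh(np.outer(x, W1) + b1)
    return h @ W2 + b2

def sse(w):
    r = mlp(w, X) - y
    return float(r @ r)                                   # sum of squared errors

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=3 * H + 1)
w = nelder_mead(sse, w0)
```

Because the best vertex is never discarded, the error of the returned point can only improve on the starting weights, even though no gradient is ever computed.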
In this paper we consider a possible improvement of conjugate gradient methods commonly used for tr...
The solution of nonparametric regression problems is addressed via polynomial approximators and one-...
Backpropagation (BP) is one of the most widely used algorithms for training feed-forward neural netw...
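As a point of contrast with the derivative-free approach above, a generic sketch of backpropagation for a one-hidden-layer network follows; the data, layer sizes, and learning rate are placeholder choices for illustration, not details from any of the listed papers:

```python
import numpy as np

# Toy regression data (assumed for illustration): learn y = x0 * x1.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (20, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.1                                     # learning rate (assumed)

def loss():
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

before = loss()
for _ in range(300):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                            # gradient of 0.5*MSE w.r.t. out
    # Backward pass: chain rule through output and hidden layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    # Gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
after = loss()
```

The forward activations are cached and reused in the backward pass, which is the source of BP's efficiency relative to finite-difference gradients.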
This paper presents two compensation methods for multilayer perceptrons (MLPs) which are very diffic...
In this contribution we present a method for constraining the learning of a Multi-Layer Perceptron n...
When used for function approximation purposes, neural networks belong to a class of models whose par...
In this paper the authors describe some useful strategies for nonconvex optimisation in order to det...
A multilayer perceptron is a feed-forward artificial neural network model that maps sets of input da...
Various researchers have used one hidden layer neural networks (weighted sums of sigmoids) to find t...
Motivated by the problem of training multilayer perceptrons in neural networks, we consider the pro...
In this work an extended class of multilayer perceptron is presented. This includes independent para...
Gradient descent and instantaneous gradient descent learning rules are popular methods for training ...
Multilayer perceptrons (MLPs) (1) are the most common artificial neural networks employed in a large...
The focus of this paper is on the neural network modelling approach that has gained increasing recog...
This paper proposes a new method to reduce training time for neural nets used as function approximat...