An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. Circuits Syst., vol. 36, no. 8, pp. 1092-1101, Aug. 1989) to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet for many applications the method's increase in computational complexity outweighs any gain in learning rate over current training methods. However, the least-squares method can be implemented more efficiently on parallel architectures than standard methods can. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors on an Int...
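As background for the abstract above, the following is a minimal sketch of one Levenberg-Marquardt step for a least-squares fit. The toy linear model, damping value, and helper names are illustrative assumptions, not the paper's neural-network formulation.

```python
import numpy as np

def lm_step(theta, residual, jacobian, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) update."""
    r = residual(theta)
    J = jacobian(theta)
    # Damped normal equations: (J^T J + lam*I) delta = -J^T r
    A = J.T @ J + lam * np.eye(len(theta))
    delta = np.linalg.solve(A, -J.T @ r)
    return theta + delta

# Toy problem: fit y = a*x + b by least squares (exact fit: a=2, b=1).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

residual = lambda th: th[0] * x + th[1] - y
jacobian = lambda th: np.stack([x, np.ones_like(x)], axis=1)  # linear model

theta = np.zeros(2)
for _ in range(20):
    theta = lm_step(theta, residual, jacobian, lam=1e-3)
```

The damping term `lam` interpolates between a Gauss-Newton step (small `lam`) and a short gradient-descent step (large `lam`), which is the trade-off between convergence speed and per-step cost that the abstract alludes to.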
Number of pages: 7. The solving of least-squares systems is a useful operation in...
The least mean squares (LMS) method for linear least squares problems differs from the ste...
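To make the contrast concrete, here is a hedged sketch of the per-sample LMS update next to full-batch steepest descent on the same linear least-squares problem; the step sizes and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true                      # noiseless desired outputs

# LMS: update after each sample, using only its instantaneous error.
w_lms = np.zeros(3)
mu = 0.05
for epoch in range(50):
    for x_k, d_k in zip(X, d):
        e_k = d_k - x_k @ w_lms
        w_lms += mu * e_k * x_k

# Steepest descent: one update per pass, using the full gradient.
w_sd = np.zeros(3)
eta = 0.05 / len(X)
for _ in range(2000):
    w_sd += eta * X.T @ (d - X @ w_sd)
```

Both arrive at the same solution here; the difference the abstract points to is in the path taken, since LMS follows noisy per-sample gradients while steepest descent follows the exact full gradient.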
This paper presents a new approach to learning in recurrent neural networks, based on the descent of...
Least squares solutions are a very important problem, which appears in a broad range of disciplines (...
Recurrent neural networks have the potential to perform significantly better than the commonly used ...
Two concurrent implementations of the method of conjugate gradients for training Elman networks are ...
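As background for the conjugate-gradient training mentioned above, the following is a minimal serial sketch of the conjugate gradient method for a symmetric positive-definite system Ax = b; it is not the paper's parallel Elman-network implementation, and the test matrix is an illustrative assumption.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate direction update
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations, and its dominant cost is the matrix-vector product, which is the part that parallelizes naturally across processors.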
In this paper we describe an on-line method of training neural networks which is based on solving th...
Feedforward neural networks are massively parallel computing structures that have the capability of ...
The multilayer perceptron is one of the most commonly used types of feedforward neural networks and ...
In this paper the parallelization of a new learning algorithm for multilayer perceptrons, specifical...
Classical methods for training feedforward neural networks are characterized by a number of shortcom...
In this work a novel approach to the training of recurrent neural nets is presented. The algorithm e...
The problems of artificial neural network learning and their parallelisation are taken up in this a...
In this paper a new approach to learning in recurrent neural networks is presented. The method propo...