This article addresses the training of artificial neural networks and its parallelisation. It compares the Levenberg-Marquardt method (LMM) with two of its modifications for training artificial neural networks: JWM (Jacobian matrices formed at each step) and BKM (Jacobian calculated only in the first step). These algorithms have the following properties: 1) simpler calculations; 2) they are partly parallelised. The experiments confirmed their efficiency: a neural network trained with them needs a similar number of epochs to the LMM but less training time.
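To make the difference between the two modifications concrete, here is a minimal sketch of the Levenberg-Marquardt update, dw = (JᵀJ + μI)⁻¹Jᵀe, applied to a toy linear least-squares problem. The flag `recompute_jacobian` distinguishes a JWM-style loop (Jacobian formed at each step) from a BKM-style loop (first-step Jacobian reused throughout). The toy model, variable names, and constant damping factor `mu` are illustrative assumptions, not the paper's actual networks or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: residuals e(w) = X w - y with an exactly recoverable w_true.
X = rng.normal(size=(50, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

def residuals(w):
    return X @ w - y

def jacobian(w):
    # For this linear model the Jacobian of the residuals is simply X;
    # for a real network it would be the matrix of error derivatives
    # with respect to the weights.
    return X

def lm_train(recompute_jacobian, steps=20, mu=1e-2):
    w = np.zeros(3)
    J = jacobian(w)                      # first-step Jacobian (always needed)
    for _ in range(steps):
        if recompute_jacobian:
            J = jacobian(w)              # JWM-style: Jacobian at each step
        e = residuals(w)
        # Levenberg-Marquardt step: solve (J^T J + mu I) dw = J^T e.
        dw = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), J.T @ e)
        w -= dw
    return w

w_jwm = lm_train(recompute_jacobian=True)    # JWM-style variant
w_bkm = lm_train(recompute_jacobian=False)   # BKM-style variant
```

Note the trade-off the sketch exposes: skipping the Jacobian recomputation (BKM) removes the most expensive per-step operation for a real network, at the cost of using stale derivative information when the model is nonlinear.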
Deep Neural Network training algorithms consume long training time, especially when the number of h...
This paper makes two principal contributions. The first is that there appears to be no previous a de...
In this paper a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is pro...
The Levenberg-Marquardt (LM) learning algorithm is a popular algorithm for training neural networks;...
In this work, two modifications of the Levenberg-Marquardt algorithm for feedforward neural networks are...
In the application of the standard Levenberg-Marquardt training process of neural networks, error os...
Long training times and non-ideal performance have been a big impediment in further continuing the u...
This paper presents a local modification of the Levenberg-Marquardt algorithm (LM). First, the mathe...
This paper introduces a novel parallel trajectory mechanism that combines Levenberg-Marquardt and Fo...
Big data is the oil of this century. A high amount of computational power is required to get know...
The Levenberg-Marquardt (LM) algorithm is one of the most effective algorithms in speeding up the co...
This paper presents two parallel implementations of the Back-propagation algor...
The Levenberg-Marquardt (LM) learning algorithm is a popular algorithm for training neural networks...
We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to g...