This paper develops a neural network (NN) trained with the conjugate gradient (CG) method. The modification of this method lies in how the line-search direction is defined. The conjugate gradient method offers several formulas for determining the step size, such as the Fletcher-Reeves, Dixon, Polak-Ribière, Hestenes-Stiefel, and Dai-Yuan methods, evaluated here on discrete electrocardiogram data. Conjugate gradients are used to update the learning rate of the neural network with different step sizes, while the gradient search direction is used to update the NN weights. The results show that Polak-Ribière achieves an optimal error, but the direction of the weight search in the NN widens, so NN training requires more epochs. Hestenes-Stiefel and Dai-Yuan, however, could n...
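Since the abstract names four conjugacy coefficients, a compact sketch may help show how they differ. The Python/NumPy code below plugs the Fletcher-Reeves, Polak-Ribière, Hestenes-Stiefel, and Dai-Yuan formulas into a generic nonlinear CG weight update; the linear-regression loss, the fixed learning rate lr, and the names train_cg and beta are illustrative assumptions, not the paper's actual model or data.

    # Minimal sketch of nonlinear conjugate gradient weight updates.
    # Assumption: a least-squares loss on (X, y); the paper's NN and ECG
    # data are replaced by this toy model purely to illustrate the formulas.
    import numpy as np

    def beta(rule, g_new, g_old, d_old):
        """Conjugacy coefficients named in the abstract."""
        if rule == "fletcher-reeves":
            return (g_new @ g_new) / (g_old @ g_old)
        if rule == "polak-ribiere":
            return (g_new @ (g_new - g_old)) / (g_old @ g_old)
        if rule == "hestenes-stiefel":
            return (g_new @ (g_new - g_old)) / (d_old @ (g_new - g_old))
        if rule == "dai-yuan":
            return (g_new @ g_new) / (d_old @ (g_new - g_old))
        raise ValueError(f"unknown rule: {rule}")

    def train_cg(X, y, rule="polak-ribiere", lr=0.01, epochs=200):
        w = np.zeros(X.shape[1])
        grad = lambda w: 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        g = grad(w)
        d = -g                                            # first step: steepest descent
        for _ in range(epochs):
            w = w + lr * d                                # move along the search direction
            g_new = grad(w)
            if np.linalg.norm(g_new) < 1e-10:             # converged; avoid zero division
                break
            d = -g_new + beta(rule, g_new, g, d) * d      # conjugate search direction
            g = g_new
        return w

Under these assumptions, a call such as train_cg(X, y, rule="dai-yuan") trains the weights; the paper's comparison effectively swaps the rule argument, and in practice the fixed lr would be replaced by the per-method step size (line search) that the study varies.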
Conjugate gradient methods constitute an excellent choice for efficiently training large ne...
A supervised learning algorithm (Scaled Conjugate Gradient, SCG) with superlinear convergence rate i...
In this dissertation the problem of the training of feedforward artificial neural networks and its a...
The conjugate gradient optimization algorithm is combined with the modified back propagation algorit...
The conjugate gradient optimization algorithm usually used for nonlinear least squares is presented ...
The usefulness of artificial neural networks (ANN) trained with the momentum backpropagation (MBP) a...
In this paper we explore different strategies to guide the backpropagation algorithm used for training a...
Conjugate gradient methods (CG) constitute excellent neural network training methods that are simpli...
Artificial Neural Networks (ANN) are one of the fields of study within Artificial Intelligence (AI) that...
This research develops the theory of NN (neural network) by using CG (conjugate gradient) to sp...
The application of artificial neural networks (ANN) in the diagnosis of neuromuscular disorders base...
Training of artificial neural networks (ANN) is normally a time-consuming task due to iteratively se...
The important feature of this work is the combination of minimizing a function with desirable proper...