A fast-convergent neural-network (NN) online training weight-update scheme based on an improved compound gradient vector is proposed in this paper. The convergence analysis indicates that, because the compound gradient vector is employed during the weight update, the presented algorithm converges faster than the back-propagation (BP) algorithm. An adaptive learning factor is introduced in this scheme, through which global convergence is obtained and the convergence procedure on plateau and flat-bottom regions is sped up. Simulations demonstrate that the improved compound-gradient-vector NN online learning scheme achieves satisfactory convergence performance and strong robustness for real time contro...
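The abstract above pairs a per-sample (online) gradient update with an adaptive learning factor. The compound-gradient rule itself is not fully specified in the abstract, so the sketch below illustrates only the generic pattern it builds on: online gradient descent on a single linear neuron, with a simple error-trend heuristic for adapting the step size (grow the factor while the epoch error falls, cut it when the error rises). All names and the specific adaptation constants are illustrative assumptions, not the paper's method.

```python
def train_online(samples, epochs=50, eta0=0.1):
    """Online (per-sample) gradient descent for one linear neuron
    y_hat = w*x + b, with a heuristic adaptive learning factor.
    NOTE: this is a generic illustration; the paper's compound
    gradient vector rule is not reproduced here."""
    w, b, eta = 0.0, 0.0, eta0
    prev_err = float("inf")
    for _ in range(epochs):
        total = 0.0
        for x, y in samples:
            e = (w * x + b) - y      # prediction error on this sample
            w -= eta * e * x         # gradient step on the weight
            b -= eta * e             # gradient step on the bias
            total += e * e
        # adapt the learning factor from the epoch-error trend
        # (grow gently while improving, halve on an increase)
        eta = min(eta * 1.05, 0.3) if total < prev_err else eta * 0.5
        prev_err = total
    return w, b

# fit y = 2x + 1 from five noiseless samples
samples = [(x, 2.0 * x + 1.0) for x in (-2, -1, 0, 1, 2)]
w, b = train_online(samples)
```

The adaptation step is the part the surveyed papers vary: BP uses a fixed `eta`, while adaptive schemes like the one sketched here (and, per the abstract, the compound-gradient scheme) change the effective step size to escape plateaus faster.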
In this work, two modifications on Levenberg-Marquardt algorithm for feedforward neural networks are...
WOS: 000348408100004. This paper presents a novel weight updating algorithm for training of multilayer...
Abstract—Conjugate gradient methods constitute an excellent choice for efficiently training large ne...
Abstract. A survey is presented on some recent developments on the convergence of online gradient me...
Nowadays, in the field of information processing, neural networks (NNs) are widely used, because they ...
In this paper we define on-line algorithms for neural-network training, based on the construction of...
Abstract. The online gradient method has been widely used as a learning algorithm for neural networks....
Since the presentation of the backpropagation algorithm, a vast variety of improvements of the techn...
Abstract. An online gradient method for BP neural networks is presented and discussed. The input tr...
The back propagation algorithm has been successfully applied to a wide range of practical problems. Si...
The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation ...
Since the discovery of the back-propagation method, many modified and new algorithms have been propo...
Asymptotic behavior of the online gradient algorithm with a constant step size employed for learning...
In this work, a novel and model-based artificial neural network (ANN) training method is developed s...
In neural networks, the accuracy of a network mainly relies on two important factors, which ...