Abstract. The online gradient method has been widely used as a learning algorithm for neural networks. We establish deterministic convergence of online gradient methods for the training of a class of nonlinear feedforward neural networks when the training examples are linearly independent. The learning rate η is chosen to be a constant during the training procedure. The monotonicity of the error function during the iteration is proved, and a criterion for choosing the learning rate η is provided to guarantee convergence. Under conditions similar to those for the classical gradient methods, an optimal convergence rate for the online gradient methods is proved.
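As a rough illustration of the method the abstract describes, the sketch below trains a single-hidden-layer feedforward network by cycling through the examples and updating the weights after each one, with the learning rate η held constant. All names (sigmoid, train_online) and the network shape are illustrative assumptions, not taken from the paper, and the paper's criterion for choosing η is not implemented; η is treated here as a free hyperparameter.

import numpy as np

# Hypothetical sketch of the online (per-example) gradient method with a
# constant learning rate eta, for a single-hidden-layer feedforward network.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online(X, y, hidden=4, eta=0.1, epochs=100, seed=0):
    """Cycle through the training examples, updating the weights after
    each example (online mode), with eta held constant throughout."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(hidden, n_in))   # input -> hidden weights
    v = rng.normal(scale=0.1, size=hidden)           # hidden -> output weights
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = sigmoid(W @ x)           # hidden activations
            out = v @ h                  # linear output unit
            err = out - t                # residual of the squared error 0.5*err**2
            grad_v = err * h             # gradient w.r.t. v
            grad_W = np.outer(err * v * h * (1.0 - h), x)  # gradient w.r.t. W
            v -= eta * grad_v            # constant-step online updates
            W -= eta * grad_W
    return W, v

In the paper's setting the constant η must satisfy the stated criterion for the error to decrease monotonically; this sketch makes no such check.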
Gradient-following learning methods can encounter problems of implementation in many applications, ...
In this paper we define on-line algorithms for neural-network training, based on the construction of...
In this work, a gradient method with momentum for BP neural networks is considered. The momentum coe...
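The snippet above refers to the momentum modification of back-propagation. As a hedged illustration, the update below is the standard heavy-ball form, where each step adds a fraction μ of the previous step; it is not necessarily the exact variant studied in that paper, and grad_E, eta, and mu are assumed names.

import numpy as np

# Hypothetical gradient step with momentum (heavy-ball form); `grad_E` and
# the coefficient names are assumptions, not taken from the cited paper.
def momentum_step(w, prev_dw, grad_E, eta=0.1, mu=0.9):
    """Return new weights and the step taken: dw = -eta*grad_E(w) + mu*prev_dw."""
    dw = -eta * grad_E(w) + mu * prev_dw
    return w + dw, dw

# Usage on a toy quadratic E(w) = 0.5*||w||^2, whose gradient is w:
w, dw = np.ones(3), np.zeros(3)
for _ in range(50):
    w, dw = momentum_step(w, dw, lambda w: w)
print(w)  # shrinks toward the minimizer at the origin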
Abstract. A survey is presented on some recent developments on the convergence of online gradient me...
Abstract. In this paper, we study the convergence of an online gradient method for feed-forward neural...
Abstract. This paper investigates an online gradient method with penalty for training feedforward neu...
Abstract. An online gradient method for BP neural networks is presented and discussed. The input tr...
Asymptotic behavior of the online gradient algorithm with a constant step size employed for learning...
Abstract. In this paper, we study the convergence of an online gradient method for feed-forward neural...
Abstract. In this paper, we prove that the online gradient method for continuous perceptrons converges...
In this work, two modifications of the Levenberg-Marquardt algorithm for feedforward neural networks are...
Abstract. In this paper, we prove that the online gradient method for continuous perceptrons converges...
In this paper the problem of neural network training is formulated as the unconstrained minimization...
Since the discovery of the back-propagation method, many modified and new algorithms have been propo...
Abstract. Recurrent neural networks have been successfully used for analysis and prediction of tempo...