Abstract—An algorithm for efficient learning in feedforward networks is presented. Momentum acceleration is achieved by solving a constrained optimization problem using nonlinear programming techniques. In particular, minimization of the usual mean square error cost function is attempted under an additional condition whose purpose is to optimize the alignment of the weight update vectors in successive epochs. The algorithm is applied to several benchmark training tasks (exclusive-or, encoder, multiplexer, and counter problems). Its performance, in terms of learning speed and scalability, is evaluated and found superior to that of reputedly fast variants of the back-propagation algorithm on the above benchmarks.
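As an illustration of the underlying idea, the NumPy sketch below trains a small network on the exclusive-or task with plain backpropagation plus a momentum term whose coefficient is scaled by the cosine alignment between the previous weight update and the current descent direction. This is a simplified surrogate for the paper's constrained nonlinear-programming formulation, not the authors' algorithm; the learning rate, momentum ceiling, and network sizes are assumptions chosen for the demonstration.

# Illustrative sketch only: backprop on XOR with alignment-scaled momentum.
# Not the paper's constrained formulation; lr, beta, and sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)

# The exclusive-or task
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 2-2-1 network with small random initial weights
W1 = rng.normal(scale=0.5, size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1))
b2 = np.zeros(1)

lr, beta = 0.5, 0.9  # assumed step size and maximum momentum coefficient
prev = [np.zeros_like(p) for p in (W1, b1, W2, b2)]  # previous updates

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y  # derivative of the mean square error, up to a constant

    # Backward pass: batch gradients of the MSE cost
    d_out = err * out * (1 - out)
    gW2 = h.T @ d_out / len(X)
    gb2 = d_out.mean(0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    gW1 = X.T @ d_h / len(X)
    gb1 = d_h.mean(0)

    grads = [gW1, gb1, gW2, gb2]
    g = np.concatenate([v.ravel() for v in grads])
    p = np.concatenate([v.ravel() for v in prev])

    # Alignment-scaled momentum: apply momentum only insofar as the previous
    # update agrees with the current descent direction (-g)
    denom = np.linalg.norm(g) * np.linalg.norm(p)
    cos = float(-g @ p / denom) if denom > 0 else 0.0
    mu = beta * max(cos, 0.0)

    for i, (param, grad) in enumerate(zip([W1, b1, W2, b2], grads)):
        step = -lr * grad + mu * prev[i]
        param += step
        prev[i] = step

print(np.round(out.ravel(), 3))  # should approach [0, 1, 1, 0]

With this seed the outputs approach [0, 1, 1, 0] after a few thousand epochs, though convergence on XOR with so small a network is seed-dependent; the sketch is meant only to show how successive update vectors can be kept aligned.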
Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-for...
Methods to speed up learning in back propagation and to optimize the network architecture have been ...
Optimizing the training time of neural networks has remained a great challenge to this day. This p...
Abstract — Two backpropagation algorithms with momentum for feedforward neural networks with a singl...
In this paper a review of fast-learning algorithms for multilayer neural networks is presented. From...
Abstract—Since the presentation of the backpropagation algorithm, a vast variety of improvements of ...
Recently, the popularity of deep artificial neural networks has increased considerably. Generally, t...
In this work, a gradient method with momentum for BP neural networks is considered. The momentum coe...
We describe a convergence acceleration scheme for multistep optimization algor...
In studies of neural networks, the Multilayered Feedforward Network is the most widely used network ...
A learning algorithm for feedforward neural networks is presented that is based on a parameter estim...
In this paper a general class of fast learning algorithms for feedforward neural networks is introdu...
The speed of convergence while training is an important consideration in the use of neural nets. The...
The momentum parameter is common within numerous optimization and local search algorithms, particula...
Abstract This paper investigates an online gradient method with penalty for training feedforward neu...