In this paper we study nonmonotone learning rules based on an acceptability criterion for the calculated learning rate. More specifically, we require that the error function value at each epoch satisfy an Armijo-type criterion with respect to the maximum error function value over a predetermined number of previous epochs. To test this approach, we propose two training algorithms with adaptive learning rates that employ the above acceptability criterion. Experimental results show that the proposed algorithms achieve considerably improved convergence speed, success rate, and generalization compared with classical neural network training methods.
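For concreteness, the following is a minimal sketch of how such an Armijo-type nonmonotone acceptability test for the learning rate could be realized; it is not the authors' implementation, and the window length M, sufficient-decrease parameter sigma, backtracking factor, and the helper names error() and grad() are illustrative assumptions.

    import numpy as np

    def nonmonotone_train(w, error, grad, epochs=200, eta0=0.1,
                          M=10, sigma=1e-4, shrink=0.5):
        """Gradient training with a nonmonotone Armijo-type acceptability test.

        Sketch only: a trial learning rate is accepted when the new error does
        not exceed the maximum error of the last M epochs minus a sufficient-
        decrease term; otherwise the rate is reduced and retried.
        """
        history = [error(w)]                    # error values of recent epochs
        for epoch in range(epochs):
            g = grad(w)
            eta = eta0
            e_ref = max(history[-M:])           # max error over the last M epochs
            # Backtrack until the Armijo-type nonmonotone criterion holds
            while error(w - eta * g) > e_ref - sigma * eta * np.dot(g, g):
                eta *= shrink
                if eta < 1e-12:                 # stop shrinking; accept a tiny step
                    break
            w = w - eta * g
            history.append(error(w))
        return w

    # Toy usage on a quadratic error surface (hypothetical example)
    E = lambda w: 0.5 * np.dot(w, w)
    dE = lambda w: w
    w_final = nonmonotone_train(np.array([3.0, -2.0]), E, dE)

Because the reference value is the maximum error over a window of past epochs rather than the error of the previous epoch alone, individual epochs are allowed to increase the error temporarily, which is the essence of the nonmonotone strategy studied here.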
Until recently, the power of multilayer feedforward artificial neural networks has been ...
Back-propagation is currently the most widely applied neural network training algorithm. Howe...
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive ...
We present nonmonotone methods for feedforward neural network training, i.e., training methods in wh...
Backpropagation is a supervised learning algorithm for training multi-layer neural networks for func...
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate ...
A general convergence theorem is proposed for a family of serial and parallel nonmonotone unconstrai...
This paper is concerned with the problem of learning in networks where some or all of the functions ...
Over the years, many improvements and refinements to the backpropagation learnin...
We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., de...
Over the years, many improvements and refinements of the backpropagation learning algori...
In this paper the problem of neural network training is formulated as the unconstrained minimization...
Analysis of a normalised backpropagation (NBP) algorithm employed in feed-forward multilayer nonline...
A stability criterion for learning is given. In the case of learning-rate adaptation of backpropagat...