Back Propagation and its variations are widely used as methods for training artificial neural networks. One such variation, Resilient Back Propagation (RPROP), has proven to be one of the best in terms of speed of convergence. Our SARPROP enhancement, based on Simulated Annealing, is described in this paper and is shown to increase the rate of convergence for some problems. The extension involves two complementary modifications: weight constraints early in training combine with noise to force the network to perform a more thorough search of the initial weight space, before allowing the network to refine its solutions as training continues.

INTRODUCTION

There have been a number of refinements made to the BP algorithm (Tollenaere, 1990; Jac...
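The core of any RPROP variant is a sign-based update whose per-weight step size grows while the gradient keeps its sign and shrinks when it flips. Below is a minimal sketch of that rule in NumPy, with an optional Gaussian noise term standing in for the SARPROP-style perturbation the abstract describes; the hyperparameter values (eta_plus, eta_minus, step bounds) and the noise scheme are illustrative assumptions, not the exact scheme from the paper.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0, rng=None, noise_scale=0.0):
    """One RPROP-style update on a weight array.

    w         : current weights
    grad      : current error gradient dE/dw
    prev_grad : gradient from the previous iteration
    step      : per-weight step sizes, adapted in place of a learning rate

    If noise_scale > 0, Gaussian noise is added to the sign-based update,
    loosely mimicking the annealed perturbation SARPROP introduces.
    Returns the updated weights and step sizes.
    """
    sign_change = grad * prev_grad
    # Same sign as last time: accelerate. Sign flip: back off.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    update = -np.sign(grad) * step
    if noise_scale > 0 and rng is not None:
        update = update + rng.normal(0.0, noise_scale, size=np.shape(w))
    return w + update, step
```

Note that only the sign of the gradient enters the update; the magnitude information lives entirely in the adapted step sizes, which is what makes RPROP robust to poorly scaled error surfaces.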
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer visi...
Artificial Neural Network (ANN) can be trained using back propagation (BP). It is the most widely us...
We propose BlockProp, a neural network training algorithm. Unlike backpropagation, it does not rely ...
This paper examines conditions under which the Resilient Propagation-Rprop algorithm fails to conver...
In this paper, a new learning algorithm, RPROP, is proposed. To overcome the inherent disadvantages ...
Many algorithms have been proposed in order to train Radial Basis Function (RBF) networks. In this p...
This report has also been published at ESANN '93 [Schiffmann et al., 1993]. The dataset used i...
ral Network (SNN) versions of Resilient Propagation (RProp) and QuickProp, both training methods use...
In this paper, a new globally convergent modification of the Resilient Propagation-Rprop algorithm i...
Abstract—Back propagation is one of the well known training algorithms for multilayer perceptron. Ho...
In this paper we compare the performance of back propagation and resilient propagation algorithms in...
Abstract-The Back-propagation (BP) training algorithm is a renowned representative of all iterative ...
This paper presents some simple techniques to improve the backpropagation algorithm. Since learning ...
Looking at the training stage of error-backpropagation algorithm as an optimization problem, in this...
Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural netwo...