This paper presents a study of weight and input noise in feedforward network training algorithms. In theory, for the optimal least-squares case, noise can be modelled by a single cost-function term. However, we believe that such ideal conditions are uncommon in practice. Both first- and second-derivative terms are shown to have the potential to de-sensitize the trained network's outputs to weight or input corruption. Simulation experiments illustrate these points by comparing the ideal case with a more realistic real-world example. The results show that although the second-derivative term can influence the network solution in the practical case, the first-derivative term is dominant.
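To make the abstract's "first and second derivative terms" concrete, the standard second-order expansion of the expected squared error under zero-mean, uncorrelated weight noise is sketched below. The notation (per-weight noise \delta_i of variance \sigma^2, network output y, target t) is ours rather than the paper's, so read this as an illustration of the claim, not the paper's own derivation.

\[
\mathbb{E}_{\delta}\!\left[\big(y(x;\,w+\delta)-t\big)^{2}\right]
\approx (y-t)^{2}
+ \sigma^{2}\sum_{i}\Big(\frac{\partial y}{\partial w_{i}}\Big)^{2}
+ \sigma^{2}\,(y-t)\sum_{i}\frac{\partial^{2} y}{\partial w_{i}^{2}}
\]

The first-derivative term directly penalizes the output's sensitivity to weight corruption. At an exact least-squares solution the residual (y - t) averages to zero against the curvature, so the second-derivative term vanishes in expectation and the noise reduces to a single cost-function term, as stated above; away from that ideal, both terms act, with the first-derivative term dominating.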
We study the effect of regularization in an on-line gradient-descent learning scenario for a general...
A brief summary is given of recent results on the use of noise in the optimal training of neural net...
For decades, gradient descent has been applied to develop learning algorithms to train a neural netw...
We analyse the effects of analog noise on the synaptic arithmetic during MultiLayer Perceptron train...
Theoretical analysis of the error landscape of deep neural networks has garnered significant interes...
This study focuses on weight initialization in multi-layer feed-forward networks...
It has been observed in numerical simulations that a weight decay can improve gener...
In this paper, we show that noise injection into inputs in unsupervised learning neural ne...
We consider the behaviour of the MLP in the presence of gross outliers in the training data. We show...
The training of multilayered neural networks in the presence of different types of noise is studied...
Injecting weight noise during training is a simple technique that has been proposed for almost two d...
There has been much interest in applying noise to feedforward neural networks in order to observe th...
Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-for...
This paper presents some numerical experiments related to a new global "pseudo-backpropagation" algo...
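Several of the snippets above concern injecting weight noise during training (the analog synaptic-noise and weight-noise-injection entries in particular). As a rough, self-contained illustration of that procedure, here is a minimal numpy sketch: fresh Gaussian noise corrupts the weights on each forward/backward pass, and the resulting gradients update the clean weights. Every size, value, and name below is an assumed placeholder, not taken from any of the cited papers.

import numpy as np

# Minimal sketch of weight-noise injection during training (illustrative
# only: network size, noise level, and data are arbitrary assumptions).
# Each step, zero-mean Gaussian noise corrupts the weights for the
# forward/backward pass; the update is applied to the clean weights.

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
T = np.sin(X)

# One hidden layer, tanh activation, linear output.
W1 = rng.normal(0.0, 0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1))
b2 = np.zeros(1)

lr, sigma = 0.05, 0.05  # learning rate and weight-noise std (assumed)

for step in range(2000):
    # Corrupt the weights for this pass only.
    W1n = W1 + rng.normal(0.0, sigma, W1.shape)
    W2n = W2 + rng.normal(0.0, sigma, W2.shape)

    # Forward pass through the noisy network.
    H = np.tanh(X @ W1n + b1)
    Y = H @ W2n + b2

    # Backward pass: mean-squared-error gradients w.r.t. the noisy weights.
    dY = 2.0 * (Y - T) / len(X)
    gW2 = H.T @ dY
    gb2 = dY.sum(axis=0)
    dH = (dY @ W2n.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH
    gb1 = dH.sum(axis=0)

    # Update the *clean* weights with gradients from the noisy pass.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2)
print(f"final training MSE: {mse:.4f}")

Sampling fresh noise at every step while updating the clean weights is what makes the expected update include the output-sensitivity penalty sketched after the abstract, rather than merely adding variance to the training trajectory.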