In error-driven distributed feedforward networks, new information typically interferes, sometimes severely, with previously learned information. We show how noise can be used to approximate the error surface of previously learned information. By combining this approximated error surface with the error surface associated with the new information to be learned, the network's retention of previously learned items can be improved and catastrophic interference significantly reduced. Further, we show that the noise-generated error surface is produced using only first-derivative information and without recourse to any explicit error information.
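A minimal sketch of one way this idea can be realized, under the assumption that the noise-approximated error surface is obtained by feeding random inputs through the already-trained network and treating the resulting outputs as pseudo-targets that are interleaved with the new items during training. The network architecture, data sizes, and training settings below are illustrative choices, not taken from the paper:

```python
# Toy demonstration (assumed implementation, not the authors' exact procedure):
# random "noise" inputs are passed through a network that has already learned
# the old items; the network's own outputs on that noise serve as pseudo-targets
# that approximate the old error surface, and they are trained alongside the
# new items so the old minimum is not abandoned.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """One-hidden-layer feedforward network trained by backprop on squared error."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def train(self, X, T, epochs=2000, lr=0.5):
        for _ in range(epochs):
            Y = self.forward(X)
            # Backpropagate the squared-error gradient through the sigmoid units.
            d2 = (Y - T) * Y * (1 - Y)
            d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= lr * self.h.T @ d2 / len(X)
            self.b2 -= lr * d2.mean(axis=0)
            self.W1 -= lr * X.T @ d1 / len(X)
            self.b1 -= lr * d1.mean(axis=0)

    def error(self, X, T):
        return float(np.mean((self.forward(X) - T) ** 2))

n_in, n_hidden, n_out = 8, 16, 8

# Hypothetical "old" and "new" random binary associations.
X_old = rng.integers(0, 2, (10, n_in)).astype(float)
T_old = rng.integers(0, 2, (10, n_out)).astype(float)
X_new = rng.integers(0, 2, (5, n_in)).astype(float)
T_new = rng.integers(0, 2, (5, n_out)).astype(float)

for use_pseudo in (False, True):
    net = MLP(n_in, n_hidden, n_out)
    net.train(X_old, T_old)                      # learn the old items first

    if use_pseudo:
        # Noise inputs and the trained network's responses to them stand in for
        # the old error surface; no stored old items are consulted.
        X_noise = rng.integers(0, 2, (50, n_in)).astype(float)
        T_noise = net.forward(X_noise)
        X_train = np.vstack([X_new, X_noise])
        T_train = np.vstack([T_new, T_noise])
    else:
        X_train, T_train = X_new, T_new

    net.train(X_train, T_train)                  # learn the new items
    print(f"pseudo-rehearsal={use_pseudo}: "
          f"old-item error {net.error(X_old, T_old):.3f}, "
          f"new-item error {net.error(X_new, T_new):.3f}")
```

Interleaving the noise-derived pseudo-patterns with the new items means that each gradient step on the new error surface is simultaneously constrained by an approximation of the old one, which is the mechanism the abstract credits with reducing catastrophic interference.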