We study numerically the memory that forgets, introduced in 1986 by Parisi by bounding the synaptic strength, with a mechanism that avoids confusion, allows the most recently learned patterns to be remembered, and has a very well-defined physiological meaning. We analyze a number of features of this learning process for a finite number of neurons and a finite number of patterns, and we discuss how the system behaves in the large but finite N limit. We analyze the basins of attraction of the patterns that have been learned, and we show that their size shrinks exponentially with the age of the pattern.
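As a rough illustration of the mechanism described above, the following sketch implements a bounded Hebbian learning rule of the Parisi type, in which each synaptic increment is clipped to a fixed interval so that older patterns are gradually overwritten, followed by zero-temperature retrieval dynamics. The step size, bound, network size, and number of patterns are illustrative assumptions, not the values used in our simulations.

```python
import numpy as np

def learn_with_bound(patterns, bound=1.0, eps=None):
    """Sequentially store +/-1 patterns with a bounded Hebbian rule.

    After each Hebbian increment the couplings are clipped to
    [-bound, +bound], so old patterns are gradually forgotten
    (a Parisi-style "memory that forgets").  The step size eps and
    the bound are illustrative choices, not values from the paper.
    """
    P, N = patterns.shape
    if eps is None:
        eps = 1.0 / np.sqrt(N)           # typical scaling assumption
    J = np.zeros((N, N))
    for xi in patterns:                   # oldest pattern first
        J = np.clip(J + eps * np.outer(xi, xi), -bound, bound)
    np.fill_diagonal(J, 0.0)              # no self-coupling
    return J

def overlap_after_dynamics(J, xi, flip_frac=0.05, sweeps=20, rng=None):
    """Start from a corrupted copy of xi, run zero-temperature
    asynchronous dynamics, and return the overlap m = (1/N) sum_i s_i xi_i."""
    rng = np.random.default_rng(rng)
    N = xi.size
    s = xi.copy()
    s[rng.random(N) < flip_frac] *= -1    # flip a small fraction of spins
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = J[i] @ s
            s[i] = 1 if h >= 0 else -1
    return float(s @ xi) / N

# Example: recently learned patterns are retrieved, the oldest are not.
rng = np.random.default_rng(0)
N, P = 200, 40
patterns = rng.choice([-1, 1], size=(P, N))
J = learn_with_bound(patterns)
print("oldest pattern overlap:", overlap_after_dynamics(J, patterns[0], rng=1))
print("newest pattern overlap:", overlap_after_dynamics(J, patterns[-1], rng=1))
```

In this toy setting the overlap with the most recent patterns stays close to 1 while the overlap with the oldest ones decays, which is the qualitative behavior (shrinking basins of attraction with pattern age) studied quantitatively in the paper.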