Memory is a fundamental part of computational systems like the human brain. Theoretical models identify memories as attractors of neural network activity patterns, based on the theory that attractor (recurrent) neural networks can capture crucial characteristics of memory, such as encoding, storage, retrieval, and long-term and working memory. In such networks, long-term storage of memory patterns is enabled by synaptic strengths that are adjusted according to activity-dependent plasticity mechanisms (of which the most widely recognized is the Hebbian rule) such that the attractors of the network dynamics represent the stored memories. Most previous studies of associative memory focus on Hopfield-like binary ...
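To make the attractor picture above concrete, the following minimal sketch stores a few binary patterns with an outer-product (Hebbian) rule and retrieves one of them from a corrupted cue. It is an illustrative toy model only; the normalization, the asynchronous sign-threshold update, and all names are standard textbook choices assumed here, not the specific formulation of any of the papers summarized in this section.

import numpy as np

def store_patterns(patterns):
    # Hebbian (outer-product) rule: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-connections.
    n_units = patterns.shape[1]
    W = patterns.T @ patterns / n_units
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, n_sweeps=10, rng=None):
    # Asynchronous updates: each unit takes the sign of its local field until the state settles.
    rng = np.random.default_rng() if rng is None else rng
    state = probe.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))       # three random binary memories
W = store_patterns(patterns)
cue = patterns[0].copy()
cue[rng.choice(100, size=10, replace=False)] *= -1  # corrupt 10% of the bits in the cue
print("overlap after recall:", recall(W, cue, rng=rng) @ patterns[0] / 100)

With few stored patterns relative to the number of units, the corrupted cue falls into the basin of attraction of the stored memory and the overlap returned is close to 1, which is the sense in which the attractors represent the stored memories.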
Introduction: Associative memory is one of the fundamental algorithms of information processing ...
A general mean-field theory is presented for an attractor neural network in which each elementary un...
In this thesis, I show that a single class of unsupervised learning rules that can be inferred from ...
Attractor neural networks such as the Hopfield model can be used to model associative memory. An eff...
A fundamental problem in neuroscience is understanding how working memory—the ability to store infor...
In this paper we summarize some of the main contributions of models of recurre...
This paper presents an Attractor Neural Network (ANN) model of Recall and Recognition. It is shown ...
Attractor networks are an influential theory for memory storage in brain systems. This theory has re...
This work discusses some aspects of the relationship between connectivity and the capability to stor...
A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying...
A recurrently connected attractor neural network with a Hebbian learning rule is currently our best ...
The work of this thesis concerns how cortical memories are stored and retrieved. In particular, larg...
In this thesis I present novel mechanisms for certain computational capabilities of the cerebral cor...