Ring networks, a particular form of Hopfield neural network, can be used in computational neuroscience to model the activity of place cells or head-direction cells. The behaviour of these models depends strongly on their recurrent synaptic connectivity matrix and on the individual neurons' activation function, both of which must be chosen appropriately to obtain physiologically meaningful conclusions. In this article, we propose simpler ways than those in the existing literature to tune the synaptic connectivity matrix so as to achieve stability in a ring attractor network with a piecewise affine activation function, and we link these results to the possible stable states the network can converge to.
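As a minimal illustrative sketch (not the authors' construction), the following Python code simulates a ring of rate neurons with a circulant connectivity matrix and a piecewise affine (clipped-linear) activation function; the cosine weight profile, gains, time constant, and drive are assumptions chosen only to show a stable activity bump forming.

```python
# Minimal ring attractor sketch: N rate neurons, circulant (cosine) weights,
# piecewise affine activation, forward Euler integration. All parameters are
# illustrative assumptions, not values from the article.
import numpy as np

N = 128                                        # number of neurons on the ring
theta = 2 * np.pi * np.arange(N) / N           # preferred directions
J0, J1 = -0.5, 1.0                             # assumed uniform inhibition / tuned excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N  # circulant weight matrix

def phi(x, gain=1.0, x_max=1.0):
    """Piecewise affine activation: zero below 0, linear, saturating at x_max."""
    return np.clip(gain * x, 0.0, x_max)

def simulate(u0, steps=2000, dt=0.01, tau=0.1, b=0.1):
    """Rate dynamics tau * du/dt = -u + W @ phi(u) + b, with constant drive b."""
    u = u0.copy()
    for _ in range(steps):
        u += dt / tau * (-u + W @ phi(u) + b)
    return u

# A small random perturbation relaxes to a stable localized "bump" of activity,
# the network's encoded head direction / position.
rng = np.random.default_rng(0)
u = simulate(0.01 * rng.standard_normal(N))
print("bump peak at neuron", int(np.argmax(phi(u))))
```

Whether such a bump is stable depends on the weight profile and the activation's slope and saturation, which is precisely the tuning question the article addresses.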