A) Here we analysed to what degree a model learns a phase- versus a rate-coding solution, as a function of the training setup and initialisation. To quantify how strongly either solution is learned, we first computed, for every unit in a model, the absolute difference between its (normalised) rate after stimulus a and its rate after stimulus b, as well as the absolute phase difference between trials with either stimulus. To obtain a single rate-coding measure per model, we then averaged the absolute rate differences over all units, and likewise took the mean absolute phase difference as a measure of phase coding. The first row plots these two measures for 6 models for each combination of the following conditions: ...
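As an illustration, here is a minimal sketch of how these two per-model measures could be computed, assuming rates and phases have already been extracted per unit; the array names and the circular wrapping of phase differences are assumptions, not the authors' code.

```python
import numpy as np

def coding_measures(rates_a, rates_b, phases_a, phases_b):
    """Per-model rate- and phase-coding measures (sketch).

    rates_a, rates_b   : (n_units,) normalised rates after stimulus a / b
    phases_a, phases_b : (n_units,) phases (radians) on trials with a / b

    Assumption: phase differences are wrapped to (-pi, pi] before taking
    the absolute value, since phase is a circular variable.
    """
    rate_coding = np.mean(np.abs(rates_a - rates_b))
    dphi = np.angle(np.exp(1j * (phases_a - phases_b)))  # circular wrap
    phase_coding = np.mean(np.abs(dphi))
    return rate_coding, phase_coding

# Toy usage with random values for 100 units
rng = np.random.default_rng(0)
print(coding_measures(rng.random(100), rng.random(100),
                      rng.uniform(-np.pi, np.pi, 100),
                      rng.uniform(-np.pi, np.pi, 100)))
```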
The figure shows the firing rate responses of output neuron #223 before training (A) and after training...
The learning rate is an information-theoretical quantity for bipartite Markov chains describing two ...
Equilibrium states of large layered neural networks with differentiable activation function and a si...
A) We here detail a simple rate-coding model that performs the working memory task. Such a model con...
A) RNNs receive transient stimuli as input, along with a reference oscillation. Networks are trained...
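For concreteness, a minimal sketch of how such an input could be constructed: a brief transient stimulus pulse alongside a continuous reference oscillation. The pulse timing, amplitude, and oscillation frequency are illustrative assumptions; the caption does not specify them.

```python
import numpy as np

def make_inputs(T=2.0, dt=1e-3, f_ref=5.0, stim_on=0.2, stim_dur=0.05, amp=1.0):
    """Transient stimulus pulse plus a reference oscillation (sketch).
    All parameter values (frequency, timing, amplitude) are assumptions."""
    t = np.arange(0.0, T, dt)
    stimulus = amp * ((t >= stim_on) & (t < stim_on + stim_dur)).astype(float)
    reference = np.sin(2.0 * np.pi * f_ref * t)  # ongoing reference oscillation
    return t, stimulus, reference

t, stim, ref = make_inputs()
print(t.shape, stim.max(), ref.min())
```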
A) We also trained RNNs without rank constraint. For these networks, initial entries in the recurrent...
We study the dynamics of on-line learning with time-correlated patterns. In this, we make a distinct...
(A) Network illustration. A set of 3600 excitatory and 900 inhibitory recurrently connected...
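A minimal sketch of how a recurrent network with these population sizes could be wired, respecting Dale's law (excitatory columns positive, inhibitory columns negative). The connection probability and synaptic weights are assumptions; the caption is truncated before those details.

```python
import numpy as np

N_E, N_I = 3600, 900        # population sizes from the caption
p = 0.1                     # connection probability (assumption)
w_E, w_I = 0.5, -2.0        # synaptic weights; stronger inhibition (assumption)

rng = np.random.default_rng(1)
N = N_E + N_I
# W[i, j] is the weight from presynaptic unit j to postsynaptic unit i;
# the first N_E columns are excitatory, the remaining N_I inhibitory.
mask = rng.random((N, N)) < p
col_weights = np.concatenate([np.full(N_E, w_E), np.full(N_I, w_I)])
W = mask * col_weights      # broadcast per presynaptic column
np.fill_diagonal(W, 0.0)    # no self-connections
print(W.shape, float((W > 0).mean()), float((W < 0).mean()))
```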
We found two qualitatively different solutions. A) In the first solution (panels A to C), we find th...
(A) Behavior of output neurons (MBONs) during first-order conditioning. During training, a CS+ (blue...
The problem of learning from examples in multilayer networks is studied within the framework of stat...
Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered netwo...
A) In order to study the connectivity of trained models, we fitted a mixture of Gaussians with 1 to ...
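As a sketch of this kind of analysis, one could fit Gaussian mixtures with an increasing number of components to per-unit connectivity loadings and compare the fits by BIC. The use of scikit-learn's GaussianMixture, the stand-in data, and BIC as the selection criterion are assumptions; the caption is truncated before the actual component range and criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Stand-in for per-unit connectivity loadings of a trained model
# (here: two synthetic clusters in 2D).
loadings = np.concatenate([rng.normal(-1.0, 0.3, (200, 2)),
                           rng.normal(+1.0, 0.3, (200, 2))])

# Fit mixtures with 1 to 5 components and pick the one with lowest BIC.
bics = []
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0)
    gmm.fit(loadings)
    bics.append(gmm.bic(loadings))
best_k = int(np.argmin(bics)) + 1
print("BIC per k:", np.round(bics, 1), "-> best k =", best_k)
```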
Learning curves show how a neural network is improved as the number of training examples increases a...
We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamic...