<p><b><i>A</i></b>, Scaling with unchanged learning parameters <i>β</i> and <i>λ</i>. Left, convergence rate (proportion of 100 networks that learned the saccade/antisaccade task). Error bars denote 95% confidence intervals. Right, median convergence speed (number of trials to criterion). <b><i>B</i></b>, Left, convergence rates with adjusted learning parameters. Bar shading indicates parameter setting (see legend in right panel). Right, median convergence speed with optimized parameters.</p>
<p>(A) An example set of generative fields for unconstrained (left column) and normalized (right col...
<p>(<b>A</b>) Mean steady training and test accuracies (left and right, respectively; n = 100) of NN...
A, Schematic of a sparsely connected network with 3 hidden layers. The output layer is fully connect...
<p>The upper row shows how the proportion of networks that converged varies as a function of <i>β</i> ...
<p><b><i>A</i></b>, Structure of the task; all possible trials are illustrated. Fixation mark ...
<p><b><i>A</i></b>, Trials were subdivided into quintiles based on the log-likelihood ratio of the evi...
<p>(A): Bar graphs and error bars depict sample means and standard deviations, both of which are calc...
<p><i>(A)</i> Multilayer modularity, <i>(B)</i> number of communities, and <i>(C)</i> mean flexibili...
Sharper increases in <i>t</i><sub>conv</sub> correspond to larger average path lengths, although high clustering could...
<p>Convergence properties for single neurons (as in <a href="http://www.ploscompbiol.org/article/inf...
<p>The network was trained 15 times for subject AA (on the left) and for subject K3B (on the right)...
<p>The scaling is done for <i>p</i> = 1 (black line) and <i>p</i> = 20 (red and blue lines). For <i>...
<p>(a) Regression-based analysis of neuronal learning. Each row in the colormap shows an individual ...
<p>(A): Learning speed when , or . The bar graph and error bars depict sample means and standard dev...
<p>P&CC individuals learn significantly more associations, whether counting only when the associatio...