Model-behavior correlations are plotted on the y-axis as a function of the layer of AlexNet trained on ImageNet. Model-behavior correlations for the letter-trained features from Testolin et al. (2017) are plotted in orange. The shaded error range indicates the 95% confidence interval across bootstrapped samples of letter pairs. (TIFF)</p>
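The bootstrapped confidence interval described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic stand-ins for per-pair model distances and behavioral response times (26 letters give 325 pairs), the correlation is taken as Pearson for concreteness, and the CI is a simple percentile bootstrap over resampled letter pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one model distance and one behavioral value
# per letter pair (26 letters -> 26*25/2 = 325 pairs).
n_pairs = 325
model_dist = rng.normal(size=n_pairs)
behavior = 0.5 * model_dist + rng.normal(scale=0.8, size=n_pairs)

def bootstrap_corr_ci(x, y, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile-bootstrap CI for the Pearson correlation,
    resampling letter pairs with replacement."""
    n = len(x)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample pairs, not subjects
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

lo, hi = bootstrap_corr_ci(model_dist, behavior)
print(f"95% bootstrap CI for r: [{lo:.3f}, {hi:.3f}]")
```

In a layerwise plot such as the one above, this procedure would be repeated once per network layer, and the resulting intervals drawn as the shaded band around each curve.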
(A, B). Gray curves show the Pearson correlation coefficients between the mean neural distances and ...
Dots depict the rank correlation of parameter estimates in one run with the mean across all other ru...
(A) Spearman rank correlation coefficients between IT and peak CNN layer similarities are shown for ...
A smaller architecture (see Methods) was trained on 26-way letter classification to create another m...
The letter-trained model shown here was trained on an image set with 550 typeset fonts per letter wi...
A. Fine-tuning. An object-trained network was fine-tuned on letters alone (yellow) or with objects a...
In this study, we model human letter-recognition times using neural networks that extract visual fea...
Here we list the model-behavior correlations (Spearman rho) of each neural network model. In additio...
A. Neural networks used to operationalize general object features and specialized letter features. A...
<p>All Features denotes the performance of a model trained as linear ensemble of models trained on i...
A. Decomposing model behavior into two metrics. We examined model behavior along two specific aspect...
<p>Mean decoding accuracies of the first PC of movement across cross-validations and sessions are di...
The group of red solid lines represents the performance of models trained on the augmented training ...
<p>The asterisks show the best significant predictor for each psychological factor. The values in th...
<p>Note. <i>r</i> = correlations between scores predicted by the model and score for that tweet from...