A: Accuracy for various combinations of N and P. Colors represent accuracy, grouped in bins spanning 3%. Lines are fitted to each group of binned accuracy values. f = 0.1, # epochs = 200, number of trained networks n = 4. B: Binary classification performance of sparse RNNs of various sizes. Each RNN classifies P = αfN² inputs. f = 0.1, α = 1, # epochs = 500, n = 4. C: Sparse RNN performance as a function of readout sparsity for RNNs with two different levels of recurrent sparsity. N = 100, α = 0.5, # epochs = 500, n = 6. D: Accuracy of sparse RNNs as a function of α = P/(fN²). Runs were truncated at 99% accuracy. N = 100, f = 0.05, # epochs = 5000, n = 4.
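The load formula used in panels B and D, P = αfN² (equivalently α = P/(fN²)), is simple enough to sketch directly; a minimal illustration in Python, with hypothetical helper names not taken from the source:

```python
def num_inputs(alpha: float, f: float, n: int) -> int:
    """Number of classified inputs P = alpha * f * N^2 (load formula from panel B)."""
    return round(alpha * f * n ** 2)

def load(p: int, f: float, n: int) -> float:
    """Network load alpha = P / (f * N^2) (horizontal axis of panel D)."""
    return p / (f * n ** 2)

# Panel B settings (f = 0.1, alpha = 1): an N = 100 RNN classifies P = 1000 inputs.
print(num_inputs(alpha=1.0, f=0.1, n=100))  # 1000
```

Under these definitions, doubling the connection probability f at fixed N doubles the number of inputs the network is asked to classify at a given load α.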
A, Schematic of a sparsely connected network with 3 hidden layers. The output layer is fully connect...
<p>Performance of all networks with increasing order/number of hidden nodes (a) MLP, (b) FLNN, (c) P...
Recently it has been shown that sparse neural networks perform better than dense networks with simil...
In addition to recurrent and output sparsity, we explored the effects of input connection sparsity. ...
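The three kinds of sparsity discussed above (recurrent, output/readout, and input) can each be modeled as a random binary connectivity mask with connection probability f. A minimal sketch, assuming independent Bernoulli connections and Gaussian weights (function names are hypothetical, not from the source):

```python
import numpy as np

def sparse_mask(rows: int, cols: int, f: float, rng: np.random.Generator) -> np.ndarray:
    """Binary mask that keeps each connection independently with probability f."""
    return (rng.random((rows, cols)) < f).astype(np.float64)

rng = np.random.default_rng(0)
N, f = 100, 0.1

# Sparse recurrent weights: elementwise product of a random mask and Gaussian weights.
W_rec = sparse_mask(N, N, f, rng) * rng.standard_normal((N, N))
frac = (W_rec != 0).mean()  # empirical connection fraction, close to f
```

The same construction applies to the input and readout matrices, each with its own sparsity level, which is how the recurrent and readout sparsities can be varied independently.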
A: Noise robustness heat maps for sparse RNNs. The horizontal axis indicates the level of dynamic no...
A: Accuracy for OS and OS+ methods at various combinations of N and P. Colors represent accuracy, gr...
Performance over eight variations of three working memory tasks from Wittig et al. 2016. Across all ...
A, Schematic of a network with dense feedforward and sparse recurrent connectivity. B, Learning impr...
<p>Pr1Rec, Pr10Rec, Pr50Rec, Pr80Rec represent precision at 1%, 10%, 50%, and 80% recall when all (<...
Classification accuracy of attention-based hybrid CNN-RNN architecture with different numbers o...
<p>Classification accuracy as a function of network load (Lexicon size), averaged over 10 networks w...
Classification accuracy of the RNN module with raw signal, the CNN module, the hybrid CNN-RNN, and the attention-based...
(A)–(C) Comparison of validation-set performance for the three different states: improved, no-change, and ...
Recurrent CNNs (a-c) were used as feature extractors in the classification task. (a, c) Feedforwards...
<p>Black dots represent the highest-performing network from each of the 100 experiments from both th...