These are the simulation data that underlie some of the figures in this paper: https://doi.org/10.1101/2021.10.20.465187 In particular, these data save the arguments, generalization metrics, and some other details from the simulations showing how generalization depends on the number of tasks. In most cases, they do not save the underlying model or samples from the underlying model, though the code for training such models is provided in the GitHub repository linked below. More information about these data, along with code for reading them and replotting the figures in the paper, can be found here: https://github.com/wj2/disentangled
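As a rough sketch only: the actual file names, formats, and loading functions are documented in the repository above, and its reading code should be preferred. But saved simulation results like these (arguments plus generalization metrics) are often serialized as Python pickles, in which case an initial inspection could look something like the following (the file name here is a placeholder, not one of the actual data files):

    import pickle

    # Placeholder name; the real file names are listed in the data record
    # and read by the code in https://github.com/wj2/disentangled
    path = "simulation_results.pkl"

    with open(path, "rb") as f:
        results = pickle.load(f)

    # The description says each result stores the simulation arguments and
    # generalization metrics, so a dict-like structure is a reasonable guess.
    if isinstance(results, dict):
        for key, value in results.items():
            print(key, type(value))
    else:
        print(type(results))

This only enumerates what a result file contains; the repository's own functions should be used to reproduce the figures, since they know the actual structure of each file.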