Researchers have proposed that deep learning, which is providing important progress in a wide range of high complexity tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning by artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, like AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers and can they handle tasks of higher complexity? Here, we demonstrate that these learning schemes indeed generalize to deep networks, if we include an attention net...
Animal learning is associated with changes in the efficacy of connections between neurons. The rules...
Abstract. Learning in the brain is associated with changes of connection strengths between neurons....
Humans can learn several tasks in succession with minimal mutual interference but perform more poorl...
Much recent work has focused on biologically plausible variants of supervised learning algorithms. H...
The brain processes information through many layers of neurons. This deep architecture is representa...
In the field of machine learning, ‘deep-learning’ has become spectacularly successful very rapidly, ...
Deep neural networks follow a pattern of connectivity that was loosely inspired by neurobiology. The...
Training deep neural networks with the error backpropagation algorithm is considered implausible fro...
This thesis investigates how general the knowledge stored in deep-Q-networks is. This general knowl...
Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and arti...
The brain processes information through multiple layers of neurons. This deep architecture is repres...
Error backpropagation is a highly effective mechanism for learning high-quality hierarchical feature...
During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embed...
Neural Network models have received increased attention in recent years. Aimed at achieving huma...