The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP) by replacing the computations that are unrealistic for physical brains to implement. While FA algorithms have been shown to work well in practice, rigorous theory proving their learning capabilities is lacking. Here we introduce the first feedback alignment algorithm with provable learning guarantees. In contrast to existing work, we make no assumptions about the size or depth of the network except that it has a single output neuron, as is the case in binary classification tasks. We show that our FA algorithm delivers on its theoretical promises in practice, surpassing the learning per...
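To make the BP-vs-FA distinction in the abstract concrete, the sketch below illustrates the general feedback alignment idea (Lillicrap et al.-style fixed random feedback weights), not the specific provably-learning variant proposed here: the hidden-layer error signal is propagated through a fixed random matrix B instead of the transpose of the forward weights, removing the "weight transport" step that makes BP biologically implausible. All names (W1, W2, B, lr) and the two-layer, single-output architecture are illustrative assumptions.

```python
# Minimal sketch of the feedback alignment (FA) family vs. backpropagation (BP),
# assuming a two-layer network with a single output neuron (binary classification score).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # forward weights: input -> hidden
W2 = rng.normal(scale=0.1, size=(1, n_hidden))      # forward weights: hidden -> output
B = rng.normal(scale=0.1, size=(n_hidden, 1))       # fixed random feedback matrix (FA)

def train_step(x, y, lr=0.05, use_fa=True):
    """One update on example (x, y); use_fa=False recovers vanilla BP."""
    global W1, W2
    # forward pass
    h = np.tanh(W1 @ x)          # hidden activations
    y_hat = W2 @ h               # scalar output
    e = y_hat - y                # output error

    # backward pass: FA sends the error through the fixed matrix B,
    # whereas BP would reuse the transposed forward weights W2.T
    feedback = B if use_fa else W2.T
    delta_h = (feedback @ e) * (1.0 - h**2)   # hidden-layer error signal

    # gradient-style local updates
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(np.sum(e**2))

x = rng.normal(size=n_in)
print(train_step(x, y=1.0))      # squared error for this step
```

The only difference from BP is the `feedback` matrix used in the backward pass; the forward pass and the form of the weight updates are unchanged, which is what allows FA methods to avoid transporting the forward weights into the feedback pathway.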