This work presents a new class of neural network models constrained by biological levels of sparsity and weight-precision, and employing only local weight updates. Concept learning is accomplished through the rapid recruitment of existing network knowledge – complex knowledge being realised as a combination of existing basis concepts. Prior network knowledge is here obtained through the random generation of feedforward networks, with the resulting concept library tailored through distributional bias to suit a particular target class. Learning is exclusively local – through supervised Hebbian and Winnow updates – avoiding the necessity for backpropagation of error and allowing remarkably rapid learning. The approach is demonstrated upon conc...
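The abstract above names supervised Winnow updates as one of its local learning rules. As a hedged illustration only (the function name, data, and parameter choices below are mine, not the authors'), here is a minimal sketch of the classic multiplicative Winnow rule (Littlestone, 1988), which updates only the weights of active inputs and so is local in the sense the abstract describes:

```python
# Hedged sketch of the Winnow update rule -- illustrative only; the
# abstract does not specify the authors' exact variant or parameters.

def winnow_step(weights, x, y, threshold, alpha=2.0):
    """One multiplicative Winnow update on a binary example.

    weights   : list of non-negative floats
    x         : list of 0/1 features
    y         : true label, 0 or 1
    threshold : firing threshold for the linear unit
    alpha     : promotion/demotion factor (> 1)
    """
    prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0
    if prediction != y:
        # Only weights on active (xi == 1) inputs change: a local update.
        factor = alpha if y == 1 else 1.0 / alpha
        weights = [w * factor if xi else w for w, xi in zip(weights, x)]
    return weights

# Toy usage: learn the disjunction x1 OR x3 over 4 binary inputs.
n = 4
w = [1.0] * n
examples = [([1, 0, 0, 0], 1), ([0, 1, 0, 0], 0),
            ([0, 0, 1, 1], 1), ([0, 1, 0, 1], 0)]
for _ in range(10):
    for x, y in examples:
        w = winnow_step(w, x, y, threshold=n)
```

Winnow's mistake bound grows only logarithmically with the number of irrelevant attributes, which is why multiplicative rules of this kind suit the sparse, high-dimensional settings the abstract targets.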
The problem of supervised learning can be phrased in terms of finding a good approximation to some u...
Predicting conditional probability densities with neural networks requires complex (at least two-hid...
In this thesis, we consider resource limitations on machine learning algorithms in a variety of sett...
Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that can also be seen as a specific...
A basic neural model for Boolean computation is examined in the context of learning from examples. T...
The Aleksander model of neural networks replaces the connection weights of conventional models by lo...
The feed-forward neural network (FNN) has drawn great interest in many applications due to its unive...
The choice of dictionaries of computational units suitable for efficient computation of binary class...
Context of the tutorial: the IEEE CIS Summer School on Computational Intelligence and Applications (...
Sparse neural networks attract increasing interest as they exhibit comparable performance to their d...
We provide novel guaranteed approaches for training feedforward neural networks with sparse connecti...
Sparsifying deep neural networks is of paramount interest in many areas, espec...
Artificial neural networks have, in recent years, been very successfully applied in a wide range of ...