This paper presents and compares results for three types of connectionist networks on perceptual learning tasks: [A] Multi-layered converging networks of neuron-like units, with each unit connected to a small randomly chosen subset of units in the adjacent layers, that learn by re-weighting of their links; [B] Networks of neuron-like units structured into successively larger modules under brain-like topological constraints (such as layered, converging-diverging hierarchies and local receptive fields) that learn by re-weighting of their links; [C] Networks with brain-like structures that learn by generation-discovery, which involves the growth of links and recruiting of units in addition to reweighting of links. Preliminary empirical results...
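The type-[A] architecture described above — layered units, each wired to a small random subset of the adjacent layer, learning only by re-weighting its existing links — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual procedure: the specific update rule (a delta rule on masked weights), the `tanh` activation, and all names here are assumptions for illustration.

```python
import numpy as np

def sparse_mask(n_in, n_out, fan_in, rng):
    """Connectivity mask: each output unit links to `fan_in`
    randomly chosen units in the previous layer."""
    mask = np.zeros((n_in, n_out))
    for j in range(n_out):
        idx = rng.choice(n_in, size=fan_in, replace=False)
        mask[idx, j] = 1.0
    return mask

class SparseLayer:
    """One layer of a randomly, sparsely connected network that
    learns only by re-weighting its existing links."""
    def __init__(self, n_in, n_out, fan_in, rng):
        self.mask = sparse_mask(n_in, n_out, fan_in, rng)
        self.w = rng.normal(scale=0.1, size=(n_in, n_out)) * self.mask

    def forward(self, x):
        return np.tanh(x @ self.w)

    def reweight(self, x, err, lr=0.1):
        # Delta-rule update, confined to links that exist (the mask);
        # no new links are grown, in contrast to generation-discovery.
        self.w += lr * np.outer(x, err) * self.mask
```

Stacking several such layers with decreasing width gives the "converging" hierarchy; the type-[C] generation-discovery networks would additionally modify `mask` at run time to grow links and recruit units.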
An algorithm that learns from a set of examples should ideally be able to exploit the available reso...
Learning when limited to modification of some parameters has a limited scope; the capability to mo...
This paper describes further research on a learning procedure for layered networks of deterministic,...
This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for...
Connectionist techniques are increasingly being used to model cognitive function with a view to prov...
This paper presents a new artificial neuron model capable of learning its receptive field in the top...
Ellerbrock TM. Multilayer neural networks: learnability, network generation, and network simplifica...
In this paper, we propose a new neural network architecture based on a family of referential multila...
This paper is structured as follows: in the following section, I will introduce constructive network...
One of the most striking features of primate V1 is a topographic ordering of receptive fields. Previo...
We show how a Hopfield network with modifiable recurrent connections undergoin...
The evidence from neurophysiological recordings from the primate visual system suggests that sensory...
Exploiting data invariances is crucial for efficient learning in both artificial and biological neur...
Many extensions have been made to the classic multi-layer perceptron (MLP) model. A notable a...
Connectionist approaches to cognitive modeling make use of large networks of simple computational u...