We present a new incremental procedure for supervised learning with noisy data. Each step adds to the current network a new unit trained to learn the error of the network. The incremental step is repeated until the error of the current network can be considered noise. The stopping criterion is very simple and can be deduced directly from a statistical test on the estimated parameters of the new unit. Initial experimental results demonstrate the efficacy of this new incremental scheme. Current work addresses theoretical analysis and practical refinements of the algorithm. 1 Introduction One of the problems encountered when dealing with neural networks is specifying their architecture: number of layers, number of uni...
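The loop described in this abstract (fit a new unit to the current residual, stop when the residual looks like noise) can be sketched as follows. This is a minimal illustration, not the paper's method: the units are plain linear least-squares fits, and the stopping rule uses a crude F-statistic threshold in place of the paper's unspecified statistical test on the unit's estimated parameters.

```python
import numpy as np

def incremental_fit(X, y, max_units=20, f_threshold=2.0):
    """Sketch of incremental residual learning: each new (linear) unit is
    trained on the current network's error; growth stops when the unit's
    fit is statistically indistinguishable from noise."""
    n, d = X.shape
    prediction = np.zeros(n)
    units = []
    for _ in range(max_units):
        residual = y - prediction
        # Least-squares fit of one linear unit (with bias) to the residual.
        Xb = np.column_stack([X, np.ones(n)])
        w, *_ = np.linalg.lstsq(Xb, residual, rcond=None)
        fitted = Xb @ w
        # F-statistic: does the unit explain more of the residual than
        # chance would?  (A real test would compare against the
        # F-distribution quantile; f_threshold is a stand-in.)
        rss = np.sum((residual - fitted) ** 2)
        tss = np.sum((residual - residual.mean()) ** 2)
        df1, df2 = d, n - d - 1
        if df2 <= 0 or rss >= tss:
            break
        f_stat = ((tss - rss) / df1) / (rss / df2)
        if f_stat < f_threshold:
            break  # residual looks like noise: stop growing the network
        units.append(w)
        prediction = prediction + fitted
    return units, prediction
```

With near-linear data the first unit absorbs most of the structure, and the second iteration's residual fails the test, so the network stops growing — the behavior the abstract's stopping criterion is designed to produce.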
The importance of the problem of designing learning machines rests on the promise of one day deliver...
We propose a novel regularizer for supervised learning called Conditioning on Noisy Targets (CNT). T...
Incremental learning from noisy data presents dual challenges: that of evaluating multiple hypothes...
This paper describes a multi-layer incremental induction algorithm, MLII, which is linked to an ex...
We present a new type of constructive algorithm for incremental learning. The algorithm overcomes ma...
Abstract—We develop, in this brief, a new constructive learning algorithm for feedforward neural net...
Conventional Neural Network (NN) training is done by introducing training patterns in the full input...
A new incremental network model for supervised learning is proposed. The model builds up a structure...
A classifier for discrete-valued variable classification problems is presented. The system utilizes an...
This paper presents DWINA: an algorithm for depth and width design of neural architectures in the ca...
A new strategy for incremental building of multilayer feedforward neural networks is proposed in the...
Most modern neural networks for classification fail to take into account the concept of the unknown....
How to learn new knowledge without forgetting old knowledge is a key issue in designing ...
In this paper, we applied the concepts of minimizing weight sensitivity cost and trainin...
Neural networks are generally exposed to a dynamic environment where the training patterns or the in...