Single hidden layer neural networks with supervised learning have been successfully applied to approximate unknown functions defined on compact functional spaces. The most advanced results also give rates of convergence, stipulating how many hidden neurons with a given activation function should be used to achieve a specific order of approximation. However, independently of the activation function employed, these connectionist models for function approximation suffer from a severe limitation: all hidden neurons use the same activation function. If the activation function of each hidden neuron is defined optimally for each approximation problem, better rates of convergence can be achieved. This is exactly the purpose of constructive learning...
In the present work, a constructive learning algorithm is employed to design an optimal one-hidden n...
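The per-neuron activation idea described in the preceding abstracts can be made concrete with a small numerical sketch. The following Python snippet is purely illustrative and is not the constructive algorithm those papers propose: it builds a one-hidden-layer approximator in which every hidden neuron may use a different activation function, fixes the hidden weights and biases at random, and fits only the output weights by least squares. All names here (hidden_features, the target function, the particular mix of activations) are choices made for this example, not anything taken from the cited works.

import numpy as np

def hidden_features(x, weights, biases, activations):
    # Column j holds activation_j(w_j * x + b_j) for hidden neuron j.
    pre = np.outer(x, weights) + biases
    return np.column_stack([act(pre[:, j]) for j, act in enumerate(activations)])

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.5 * x**2          # "unknown" target, known here only to score the fit

n_hidden = 10
weights = rng.normal(scale=3.0, size=n_hidden)   # hidden parameters fixed at random
biases = rng.normal(scale=1.0, size=n_hidden)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(z, 0.0)
activations = [np.tanh, np.cos, relu, np.sin, sigmoid,
               np.tanh, np.cos, relu, np.sin, sigmoid]  # one activation per hidden neuron

H = hidden_features(x, weights, biases, activations)
coeffs, *_ = np.linalg.lstsq(H, y, rcond=None)          # fit output weights only
print("max abs error:", np.max(np.abs(H @ coeffs - y)))

Allowing the activation mix to vary per neuron is what distinguishes this toy setup from the usual fixed-activation network; a constructive method would additionally add neurons one at a time and choose each activation to best reduce the residual error.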
Many supervised machine learning methods are naturally cast as optimization pr...
It is well known that Artificial Neural Networks are universal approximators. The classical result ...
Determining network size used to require various ad hoc rules of thumb. In recent years, several res...
We prove that neural networks with a single hidden layer are capable of providing an optim...
Approximation properties of the MLP (multilayer feedforward perceptron) model of neural netw...
The universal approximation capability exhibited by one-hidden layer neural networks is explored to ...
In recent years, multi-layer feedforward neural networks have been popularly used for pattern classi...
Feedforward neural networks have wide applicability in various disciplines of ...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
In the present work, a constructive learning algorithm is employed to design an optimal one-hidden l...
In this paper we characterize incremental approximation of discrete functions by using one-hidden-la...
We propose a constructive approach to building single-hidden-layer neural networks for nonlinear fun...
We consider a class of neural networks whose performance can be analyzed and geometrically visualize...
Approximation of highly nonlinear functions is an important area of computational intelligence. The ...