In this paper a neural network for approximating continuous and discontinuous mappings is described. The activation functions of the hidden nodes are Radial Basis Functions (RBFs) whose variances are learned by means of an evolutionary optimization strategy. A new incremental learning strategy is used to improve the network's performance. This strategy saves computational time because the network structure grows selectively and the learning algorithm keeps the effects of the activation functions local. Furthermore, it does not require high-order derivatives. An analysis of the learning capabilities and a comparison of the network's performance with other approaches reported in the literature have been performed.
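The abstract above describes a network whose hidden nodes are Gaussian RBFs. A minimal sketch of that architecture, assuming fixed centers and a shared width and fitting only the linear output weights by least squares (the paper's evolutionary tuning of the variances and incremental growth are not reproduced here; the names `RBFNet`, `fit`, and `predict` are illustrative):

```python
import numpy as np

class RBFNet:
    """Hypothetical minimal RBF approximator: Gaussian hidden units with
    fixed centers and a shared width; output weights fit by least squares."""

    def __init__(self, centers, width):
        self.centers = np.asarray(centers, dtype=float)  # hidden-node centers c_j
        self.width = float(width)                        # shared width (std dev) of the Gaussians
        self.weights = None

    def _phi(self, x):
        # Gaussian activations: phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
        d = np.asarray(x, dtype=float)[:, None] - self.centers[None, :]
        return np.exp(-(d ** 2) / (2.0 * self.width ** 2))

    def fit(self, x, y):
        # Solve the linear least-squares problem for the output-layer weights
        self.weights, *_ = np.linalg.lstsq(
            self._phi(x), np.asarray(y, dtype=float), rcond=None)
        return self

    def predict(self, x):
        return self._phi(x) @ self.weights

# Usage: approximate sin on [0, 2*pi] with 10 evenly spaced centers
x = np.linspace(0.0, 2.0 * np.pi, 50)
net = RBFNet(centers=np.linspace(0.0, 2.0 * np.pi, 10), width=0.7)
net.fit(x, np.sin(x))
err = np.max(np.abs(net.predict(x) - np.sin(x)))
```

Because the Gaussians decay quickly, each center influences only a local region of the input, which is the locality property the abstract attributes to the activation functions.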
We propose a scalable framework for the learning of high-dimensional parametric maps via adaptively ...
We analyze how radial basis functions are able to handle problems which are not linearly separable. ...
This dissertation presents a new strategy for the automatic design of neural networks. The learning ...
Abstract—A technique for approximating a continuous function of variables with a radial basis functi...
Most stochastic gradient descent algorithms can optimize neural networks that are sub-differentiable...
Abstract. We prove that neural networks with a single hidden layer are capable of providing an optim...
This dissertation studies neural networks for pattern classification and universal approximation. Th...
Abstract—A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification...
Structure of the incremental neural network (IncNet) is controlled by growing and pruning to match th...
Abstract: Function approximation, which finds the underlying relationship from a given finite input...
This paper studies the computational power of various discontinuous real computational models that ...
A radial basis function (RBF) neural network is constructed of a certain number of RBF neurons, and thes...
Abstract — Function approximation arises in many applications. The radial basis function (RB...