This work presents two novel approaches to determining the optimum growing multi-experts network (GMN) structure. The first, called the direct method, deals with the expertise domains and levels of the local experts. The growing neural gas (GNG) algorithm is used to cluster the local experts, and the concept of error distribution is used to apportion error among them. Once the network reaches a specified size, a redundant-expert removal algorithm is invoked to prune the network based on the ranking of the experts. However, GMN is not user-friendly because it exposes too many network control parameters. A self-regulating GMN (SGMN) algorithm is therefore proposed. SGMN adopts self-adaptive learning rates for gradient-descent learning...
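The error-distribution and ranking-based pruning steps described above can be illustrated with a minimal sketch. Everything here (class names, the Gaussian responsibility function, the ranking criterion) is an illustrative assumption, not the paper's actual GMN/SGMN implementation, and SGMN's self-adaptive learning rates are omitted for brevity:

```python
import numpy as np

class LocalExpert:
    """A local linear model anchored at a GNG-style unit center (illustrative)."""
    def __init__(self, center):
        self.center = np.asarray(center, dtype=float)
        self.w = np.zeros(self.center.size + 1)  # weights + bias
        self.error = 0.0                         # accumulated apportioned error

    def predict(self, x):
        return self.w[:-1] @ x + self.w[-1]

def responsibilities(experts, x):
    """Soft assignment of input x to experts by distance to their centers."""
    d = np.array([np.linalg.norm(x - e.center) for e in experts])
    a = np.exp(-d**2) + 1e-12
    return a / a.sum()

def train_step(experts, x, y, lr=0.05):
    """One gradient-descent step, apportioning the squared error among the
    experts in proportion to their responsibility for x (error distribution)."""
    r = responsibilities(experts, x)
    y_hat = sum(ri * e.predict(x) for ri, e in zip(r, experts))
    err = y - y_hat
    for ri, e in zip(r, experts):
        e.error += ri * err**2          # each expert's share of the error
        grad = -2.0 * ri * err          # d(err^2)/d(expert output) * weight
        e.w[:-1] -= lr * grad * x
        e.w[-1] -= lr * grad
    return err**2

def prune(experts, keep):
    """Rank experts and drop the rest. One plausible ranking: keep those with
    the lowest accumulated apportioned error; the paper's criterion may differ."""
    return sorted(experts, key=lambda e: e.error)[:keep]

# Tiny demo on a toy 1-D regression task
rng = np.random.default_rng(0)
experts = [LocalExpert([c]) for c in (-1.0, 0.0, 1.0)]
for _ in range(2000):
    x = rng.uniform(-1, 1, size=1)
    train_step(experts, x, np.sin(3 * x[0]))
experts = prune(experts, keep=2)
```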
This paper introduces a new fast, effective and practical model structure construction algorithm for...
This paper establishes a connection between a neurofuzzy network model with the Mixture of Experts N...
A typical feedforward neural network relies solely on its training algorithm, such as backprop or q...
An endeavor is made in this paper to describe a self-regulating constructive multi-model neural netw...
This paper deals with a novel idea of identification of nonlinear dynamic systems via a constructivi...
Mixture of Experts (MoE) is a classical architecture for ensembles where each member is specialised...
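The classical MoE combination this abstract refers to can be written in a few lines: a gating network produces a softmax weighting over the experts, and the ensemble output is the gated sum. This is the standard textbook form, not this particular paper's variant, and the names are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_predict(x, experts, gate_w):
    """Classical mixture of experts: `experts` is a list of scalar-valued
    functions of x, `gate_w` maps x to one logit per expert."""
    g = softmax(gate_w @ x)         # gating probabilities, one per expert
    return sum(gi * f(x) for gi, f in zip(g, experts))
```

In practice the experts are themselves small networks trained jointly with the gate, so each one specialises in the region of input space where the gate assigns it high probability.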
Neural networks are generally considered as function approximation models that map a set of input fe...
Most supervised neural networks are trained by minimizing the mean square error (MSE) of the trainin...
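For reference, the MSE criterion referred to here is the standard one over $N$ training examples (stated for completeness, not quoted from the paper):

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2,$$

where $y_i$ is the target and $\hat{y}_i$ the network output for the $i$-th training example.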
This report provides a comparative study of three proposed self-organising neural network models tha...
A multi-agent system (MAS) is one in which a number of independent software agents interact with each other...
We present a method for determining the globally optimal on-line learning rule for a soft committe...
A method is introduced that can directly acquire knowledge-engineered, rule-based logic in an adapti...
Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specif...
In this article the problem of clustering massive data sets, which are represented in the matrix for...
In this study, we introduce a class of neural architectures of self-organizing neural networ...