In this paper, we propose a hybrid learning algorithm for single hidden layer feedforward neural networks (SLFNs) for data classification. The proposed hybrid algorithm is a two-phase learning algorithm based on the quasisecant and simulated annealing methods. First, the weights between the hidden-layer and output-layer nodes (the output-layer weights) are adjusted by the quasisecant algorithm. Then simulated annealing is applied for global attribute weighting. The weights between the input-layer and hidden-layer nodes are fixed in advance and are not included in the learning process. This two-phase training scheme is a novel idea and differs from existing approaches. The numerical results o...
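The two-phase idea can be sketched as follows. This is only an illustrative reconstruction, not the paper's implementation: the quasisecant step for the output-layer weights is replaced here by a plain least-squares solve as a stand-in, and all function names (`hidden`, `fit_output_weights`, `anneal_attribute_weights`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    # Fixed random input-to-hidden mapping; W and b are never trained.
    return np.tanh(X @ W + b)

def fit_output_weights(H, y):
    # Phase 1 stand-in: least squares for the output-layer weights
    # (the paper uses a quasisecant method here instead).
    return np.linalg.lstsq(H, y, rcond=None)[0]

def anneal_attribute_weights(X, y, W, b, iters=200, T0=1.0):
    # Phase 2: simulated annealing over one weight per input attribute.
    def mse(att):
        H = hidden(X * att, W, b)
        beta = fit_output_weights(H, y)
        return float(np.mean((H @ beta - y) ** 2)), beta

    a = np.ones(X.shape[1])               # start with unweighted attributes
    cur_err, beta = mse(a)
    best_a, best_beta, best_err = a, beta, cur_err
    T = T0
    for _ in range(iters):
        cand = a + rng.normal(scale=0.1, size=a.shape)
        err, beta_c = mse(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if err < cur_err or rng.random() < np.exp((cur_err - err) / T):
            a, cur_err = cand, err
            if err < best_err:
                best_a, best_beta, best_err = cand, beta_c, err
        T *= 0.98                          # geometric cooling schedule
    return best_a, best_beta, best_err

# Toy usage: two-class data where only the first attribute matters.
X = rng.normal(size=(80, 3))
y = (X[:, 0] > 0).astype(float)
W = rng.normal(size=(3, 10))              # fixed random input-to-hidden weights
b = rng.normal(size=10)
a, beta, err = anneal_attribute_weights(X, y, W, b)
```

Because the hidden-layer weights stay fixed, each candidate attribute weighting only requires re-solving the cheap output-layer subproblem, which is what makes the two-phase split attractive.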
This paper proposes a learning framework for single-hidden layer feedforward neural networks (SLFN) ...
Training neural networks is a difficult task of great significance in the field of supervised lear...
A fast algorithm is proposed for optimal supervised learning in multiple-layer neural networks. The ...
The backpropagation algorithm is a classical technique used in the training of artificial neural net...
We propose a binary classifier based on the single hidden layer feedforward neural network (SLFN) us...
We present a novel training algorithm for a feedforward neural network with a single hidden layer o...
We present a neural network architecture and a training algorithm designed to enable very rapid trai...
In this study, we focus on feed-forward neural networks with a single hidden layer. The research tou...
Since the introduction of the backpropagation algorithm as a learning rule for neural networks much ...
In this work, we propose a hybrid particle swarm optimization-simulated annealing algorithm and pres...
This paper describes two algorithms based on cooperative evolution of internal hidden network repr...
A novel multistage feedforward network is proposed for efficient solving of difficult clas...
This work introduces an alternative algorithm, simulated annealing, to minimize the prediction error...
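A minimal, generic simulated-annealing loop of the kind such work applies to a network's prediction error can be sketched as follows; the objective here is a toy one-dimensional function so the sketch stays self-contained, and the function name, cooling schedule, and step size are illustrative assumptions, not the cited work's method.

```python
import math
import random

def simulated_annealing(error, x0, iters=2000, T0=1.0, cooling=0.995, seed=42):
    rng = random.Random(seed)
    x, e = x0, error(x0)
    best_x, best_e = x, e
    T = T0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, 0.5)            # random neighbour
        ec = error(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if ec < e or rng.random() < math.exp((e - ec) / T):
            x, e = cand, ec
            if ec < best_e:
                best_x, best_e = cand, ec
        T *= cooling                               # geometric cooling
    return best_x, best_e

# Toy "prediction error" with its minimum at x = 3.
f = lambda x: (x - 3.0) ** 2
x_star, e_star = simulated_annealing(f, x0=-5.0)
```

The occasional acceptance of worse moves is what lets the method escape the local minima that pure gradient descent on a network's error surface can get trapped in.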
An extremely simple technique for training the weights of a feedforward multilayer neural n...
Interest in algorithms which dynamically construct neural networks has been growing in recent years....