In this paper we discuss the training of three-layer neural network classifiers by solving inequalities. Specifically, we first represent each class by the center of the training data belonging to that class, and determine the set of hyperplanes that separate each class into a single region. Then, according to whether each center lies on the positive or negative side of a hyperplane, we determine the hidden-neuron target values for each class. Since the convergence condition of the classifier is then expressed as two sets of linear inequalities, we solve the sets successively with the Ho-Kashyap algorithm. We demonstrate the advantage of our method over backpropagation (BP) using three benchmark data sets.
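The Ho-Kashyap algorithm named in this abstract solves a system of linear inequalities Y a > 0 by jointly adjusting the weight vector a and a positive margin vector b. Below is a minimal sketch of the classical procedure (as in Duda and Hart); the function name, stopping criterion, and parameter defaults are our own choices for illustration, not details taken from the paper:

```python
import numpy as np

def ho_kashyap(Y, rho=0.5, b0=1.0, max_iter=1000, tol=1e-6):
    """Seek a with Y a > 0, where each row of Y is a label-normalized,
    augmented training pattern. Returns (a, b, converged)."""
    n, d = Y.shape
    b = np.full(n, b0)           # positive margin vector, adjusted over time
    Y_pinv = np.linalg.pinv(Y)   # pseudoinverse of Y, reused every iteration
    a = Y_pinv @ b
    for _ in range(max_iter):
        a = Y_pinv @ b           # least-squares solution of Y a = b
        e = Y @ a - b            # error vector
        if np.all(np.abs(e) < tol):
            return a, b, True    # Y a is (numerically) equal to b > 0
        # raise b only on components where the error is positive,
        # which drives Y a toward strictly positive values
        b = b + rho * (e + np.abs(e))
    return a, b, False
```

For a two-class problem, Y is built by multiplying each augmented pattern [x, 1] by its label in {+1, -1}, so that a single inequality Y a > 0 covers both classes; for linearly separable data the iteration reaches a separating a in finitely many steps for 0 < rho <= 1.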
Abstract--Over the last decade, classification methods have become useful in a wide range of applications. Clas...
WOS: 000348408100004. This paper presents a novel weight updating algorithm for training of multilayer...
This paper presents two compensation methods for multilayer perceptrons (MLPs) which are very diffic...
According to the CARVE algorithm, any pattern classification problem can be synthesized in three la...
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
This paper addresses the relationship between the number of hidden layer nodes in a neural network, ...
The concept of domains of recognition is introduced for three-layered neural networks. The domain li...
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weig...
This paper investigates neural network training as a potential source of problems for benchmarking c...
In this paper, the authors propose a new training algorithm which does not rely solely upon the traini...
Inductive Inference Learning can be described in terms of finding a good approximation to some unkno...
An advanced method of training artificial neural networks is presented here which aims to identify t...
Abstract- We propose a novel learning algorithm to train networks with multi-layer linear-threshold ...