The standard multi-layer perceptron (MLP) training algorithm implicitly assumes that equal numbers of examples are available to train each of the network classes. However, in many condition monitoring and fault diagnosis (CMFD) systems, data representing fault conditions can only be obtained with great difficulty: as a result, training classes may vary greatly in size, and the overall performance of an MLP classifier may be comparatively poor. We describe two techniques which can help ameliorate the impact of unequal training set sizes. We demonstrate the effectiveness of these techniques using simulated fault data representative of that found in a broad class of CMFD problems.
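The abstract does not specify which two techniques are used, but a common way to ameliorate unequal training set sizes is to weight each class's contribution to the loss inversely to its frequency, so that rare fault classes are not swamped by the majority (healthy) class. A minimal sketch, assuming inverse-frequency class weighting (the function name and data below are illustrative, not from the paper):

```python
import numpy as np

def class_weights(labels):
    """Inverse-frequency weights: each class's weight is
    total_examples / (num_classes * class_count), so minority
    fault classes contribute as much to the loss as the
    majority class."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical CMFD training set: 90 healthy examples, 10 fault examples
labels = np.array([0] * 90 + [1] * 10)
w = class_weights(labels)
# The rare fault class (1) receives a proportionally larger weight
# than the abundant healthy class (0).
```

These weights would typically multiply the per-example error terms during back-propagation, which is equivalent to oversampling the minority class without duplicating data.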
We propose a new learning algorithm to enhance fault tolerance of multi-layer neural networks (MLN)....
Abstract. Typically the response of a multilayered perceptron (MLP) network on points which are far ...
In this contribution we present an algorithm for using possibly inaccurate knowledge of model deriva...
This paper focuses on the development of neural-based condition-monitoring and fault-diagnosis (CMFD...
As new microcontrollers and related processors have become available, it has become possible to crea...
In this paper, results are presented from a comprehensive series of studies aimed at assessing the s...
Firstly, the thesis addresses the problem caused by the limited availability of data for some classe...
Multilayer perceptrons (MLPs) (1) are the most common artificial neural networks employed in a large...
The standard implementation of the back-propagation training algorithm for multi-layer Perceptron (M...
Abstract. Due to the chaotic nature of multilayer perceptron training, training error usually fails t...
Multiple classifier systems or ensemble is an idea that is relevant both to neural computing and to ...
This paper presents two compensation methods for multilayer perceptrons (MLPs) which are very diffic...
The Multi-Layer Perceptron (MLP) is one of the most widely applied and researched Artificial Neural ...