A mixed order hyper network (MOHN) is a neural network in which weights can connect any number of neurons, rather than the usual two. MOHNs can be used as content addressable memories with a higher capacity than standard Hopfield networks. They can also be used for regression, clustering, classification, and as fitness models for use in heuristic optimisation. This paper presents a set of methods for estimating the values of the weights in a MOHN from training data. The methods are compared to each other and to a standard MLP trained by back propagation, and are found to be faster to train than the MLP and more reliable, because the MOHN error function does not contain local minima.
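To make the weight estimation claim concrete, the sketch below shows one way a MOHN of this kind could be implemented and trained. It assumes the standard MOHN formulation (bipolar neurons taking values in {-1, +1} and one weight per subset of connected neurons, with the output a weighted sum of products over those subsets). Because the output is linear in the weights, fitting them reduces to ordinary least squares, which is consistent with the abstract's statement that the error function has no local minima. The function names and the specific fitting routine here are illustrative assumptions, not the paper's own code.

```python
import numpy as np

# Minimal MOHN sketch (an assumed formulation, not the paper's exact code):
# neurons take values in {-1, +1} and each weight w_k attaches to a subset
# S_k of neuron indices. The network output for an input x is
#   f(x) = sum_k w_k * prod_{j in S_k} x[j]
# Since f is linear in the weights, they can be estimated from training data
# by ordinary least squares, giving a convex error surface with no local minima.

def mohn_features(X, subsets):
    """Map each input row to the products of its values over each weight's subset."""
    return np.column_stack([np.prod(X[:, list(s)], axis=1) for s in subsets])

def fit_mohn(X, y, subsets):
    """Estimate MOHN weights by least squares on the product features."""
    F = mohn_features(X, subsets)
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

def mohn_output(x, w, subsets):
    """Evaluate f(x) = sum_k w_k * prod_{j in S_k} x_j for a single input."""
    return sum(wk * np.prod(x[list(s)]) for wk, s in zip(w, subsets))

# Example: a network on 3 neurons with first-, second- and third-order weights.
subsets = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 1, 2)]
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(200, 3))           # bipolar training inputs
true_w = np.array([0.5, -1.0, 0.2, 0.8, 0.0, 1.5])
y = mohn_features(X, subsets) @ true_w           # noiseless targets
w = fit_mohn(X, y, subsets)                      # recovers true_w exactly
print(np.round(w, 3))
```

The "mixed order" character is visible in the subset list: singleton subsets are ordinary first-order weights, pairs correspond to standard two-neuron connections, and larger subsets are the higher-order weights that distinguish a MOHN from a conventional network.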