This paper introduces a new method which employs the concept of “Orientation Vectors” to train a feed-forward neural network. It is shown that this method is suitable for problems where large dimensions are involved and the clusters are characteristically sparse. For such cases, the new method is not NP-hard as the problem size increases. We ‘derive’ the present technique by starting from Kolmogorov’s method and then relax some of the stringent conditions. It is shown that for most classification problems three layers are sufficient and the number of processing elements in the first layer depends on the number of clusters in the feature space. This paper explicitly demonstrates that for large dimension space as the number of clusters in...
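The abstract claims that three layers suffice for most classification problems and that the first-layer unit count tracks the number of clusters in the feature space. The sketch below is only a minimal illustration of that architectural claim, not the paper's Orientation Vector training rule, which is not given here; the layer sizes, the tanh activation, and all function names are assumptions.

```python
# Illustrative sketch (assumed, not the paper's method): a three-layer
# feed-forward network whose hidden layer has one processing element per
# cluster in the feature space.
import numpy as np

rng = np.random.default_rng(0)

def make_three_layer_net(n_features, n_clusters, n_classes):
    """Initialize a 3-layer MLP: input -> hidden (one unit per cluster) -> output."""
    W1 = rng.standard_normal((n_features, n_clusters)) * 0.1
    b1 = np.zeros(n_clusters)
    W2 = rng.standard_normal((n_clusters, n_classes)) * 0.1
    b2 = np.zeros(n_classes)
    return W1, b1, W2, b2

def forward(x, params):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)   # hidden layer: one unit per assumed cluster
    return h @ W2 + b2         # output layer: class scores

# Example: 100-dimensional sparse feature space with 5 clusters and 5 classes.
params = make_three_layer_net(n_features=100, n_clusters=5, n_classes=5)
x = rng.standard_normal((8, 100))   # a batch of 8 points
print(forward(x, params).shape)     # (8, 5)
```

The hidden width here is fixed directly from an assumed cluster count; in practice one would estimate that count from the data (e.g., by a clustering pass) before sizing the layer.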
Up to now many neural network models have been proposed. In our study we focus on two kinds of feedf...
This paper is primarily oriented towards discrete mathematics and emphasizes the occurrence ...
In this paper, based on an asymptotic analysis of the Softmax layer, we show that when training neur...
The goal of data mining is to solve various problems dealing with knowledge extraction from huge amo...
The training of multilayer perceptron is generally a difficult task. Excessive training times and la...
The feed-forward neural network (FNN) has drawn great interest in many applications due to its unive...
I extend the class of exactly solvable feed-forward neural networks discussed in a previous publicat...
Since the discovery of the back-propagation method, many modified and new algorithms have been propo...
The back propagation algorithm caused a tremendous breakthrough in the application of multilayer per...
This study focuses on the subject of weight initialization in multi-layer feed-forward networks....
In this study, we focus on feed-forward neural networks with a single hidden layer. The research tou...
In this thesis we investigate various aspects of the pattern recognition problem solving process. Pa...
Deep neural networks of sizes commonly encountered in practice are proven to c...
We provide novel guaranteed approaches for training feedforward neural networks with sparse connecti...
In this paper, an improved training algorithm based on the terminal attractor concept for feedforwar...