In this paper, we present the convergence rate of the error of a neural network trained by a constructive method, in which the network is grown by adding hidden units one at a time. The main idea of this work is to find the eigenvalues of the transformation matrix that maps the error before a hidden unit is added to the error after. Using these eigenvalues, we show the relation between the convergence rates of networks with and without thresholds in the output layer.
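To make the constructive mechanism concrete, the sketch below grows a one-hidden-layer regression network unit by unit and records the empirical error-contraction ratio after each addition. The candidate-selection rule, the least-squares output refit, and all names are illustrative assumptions, not the paper's exact algorithm; the printed ratio ||e_{k+1}|| / ||e_k|| merely plays the role of the eigenvalue-governed contraction rate analyzed in the paper.

# Hypothetical sketch of a constructive training loop. Hidden units are
# added one at a time and the empirical ratio ||e_{k+1}|| / ||e_k|| is
# tracked as a proxy for the dominant eigenvalue of the error-transformation
# matrix studied in the paper. The cascade-correlation-style candidate rule
# below is an assumption; the abstract does not specify how units are fit.
import numpy as np

rng = np.random.default_rng(0)

def add_hidden_unit(X, residual, n_candidates=200):
    """Pick the random candidate unit whose output best correlates
    with the current residual (an illustrative fitting rule)."""
    best_w, best_h, best_score = None, None, -np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])
        h = np.tanh(X @ w)                      # candidate unit's output
        score = abs(h @ residual) / (np.linalg.norm(h) + 1e-12)
        if score > best_score:
            best_w, best_h, best_score = w, h, score
    return best_w, best_h

# Toy regression problem.
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

residual = y - y.mean()          # start from a bias-only model
outputs = [np.ones_like(y)]      # column of ones = output-layer threshold
ratios = []

for k in range(10):
    err_before = np.linalg.norm(residual)
    _, h = add_hidden_unit(X, residual)
    outputs.append(h)
    H = np.column_stack(outputs)
    # Refit the output weights by least squares after each addition.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    residual = y - H @ beta
    ratio = np.linalg.norm(residual) / err_before
    ratios.append(ratio)
    print(f"units={k + 1:2d}  ||e||={np.linalg.norm(residual):.4f}  ratio={ratio:.3f}")

The leading column of ones acts as the output-layer threshold; removing it yields the threshold-free variant whose convergence rate the paper compares against.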
In this work, two modifications on Levenberg-Marquardt algorithm for feedforward neural networks are...
Since the discovery of the back-propagation method, many modified and new algorithms have been propo...
In this paper we define on-line algorithms for neural-network training, based on the construction of...
In this paper the problem of neural network training is formulated as the unconstrained minimization...
This paper presents a mathematical analysis of the occurrence of temporary minima during training of...
This article deals with the determination of the rate of convergence to the unit of some neural netw...
Graduation date: 1990. In this thesis, the reduction of neural networks is studied. A new, largely ...
Determining network size used to require various ad hoc rules of thumb. In recent years, several res...
This article deals with the determination of the rate of convergence to the unit of some neu...
Learning from data formalized as a minimization of a regularized empirical error is studied in terms...
Asymptotic behavior of the online gradient algorithm with a constant step size employed for learning...
A general convergence theorem is proposed for a family of serial and parallel nonmonotone unconstrai...
Batch training algorithms with a different learning rate for each weight are investigated. The adapt...
In this paper we prove convergence rates for the problem of approximating functions f by neu...
This article deals with the determination of the rate of convergence to the unit of some neural netw...