Motivated by the problem of training multilayer perceptrons in neural networks, we consider the problem of minimizing E(x) = ∑_{i=1}^{n} f_i(ξ_i · x), where ξ_i ∈ R^s, 1 ≤ i ≤ n, and each f_i(ξ_i · x) is a ridge function. We show that when n is small the problem of minimizing E can be treated as one of minimizing univariate functions, and we use gradient algorithms for minimizing E when n is moderately large. For large n, we present online gradient algorithms and, in particular, show the monotonicity and weak convergence of the algorithms.
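The online gradient scheme described above updates x using one term f_i(ξ_i · x) at a time. A minimal sketch, assuming quadratic ridge functions f_i(t) = (t − y_i)^2 for illustration (the targets y_i and the quadratic form are assumptions, not taken from the paper):

```python
import numpy as np

# Online gradient descent for E(x) = sum_i f_i(xi_i . x).
# Illustrative assumption: f_i(t) = (t - y_i)^2, so
# grad of f_i(xi_i . x) w.r.t. x is 2*(xi_i . x - y_i) * xi_i.
rng = np.random.default_rng(0)
s, n = 3, 50
xi = rng.normal(size=(n, s))       # directions xi_i in R^s
x_true = rng.normal(size=s)
y = xi @ x_true                    # targets chosen so min E = 0 is known

def grad_i(x, i):
    # gradient of the i-th ridge term at x
    return 2.0 * (xi[i] @ x - y[i]) * xi[i]

x = np.zeros(s)
eta = 0.01                         # step size
for epoch in range(200):
    for i in rng.permutation(n):   # one online pass: one sample per update
        x -= eta * grad_i(x, i)

E = np.sum((xi @ x - y) ** 2)      # objective at the final iterate
```

Because each update uses only one sample, the cost per step is O(s) regardless of n, which is the point of the online variant for large n.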
We present a framework for calculating globally optimal parameters, within a given time frame, for o...
According to the CARVE algorithm, any pattern classification problem can be synthesized in three la...
This paper presents two compensation methods for multilayer perceptrons (MLPs) which are very diffic...
This thesis addresses the issue of applying a "globally" convergent optimization scheme to the train...
Several neural network architectures have been developed over the past several years. One of the mos...
A fast algorithm is proposed for optimal supervised learning in multiple-layer neural networks. The ...
Multilayer perceptrons (MLPs) (1) are the most common artificial neural networks employed in a large...
The natural gradient descent method is applied to train an n-m-1 multilayer perceptron. Based on an...
In this paper the problem of neural network training is formulated as the unconstrained minimization...
We propose a novel learning algorithm to train networks with multi-layer linear-threshold ...
A quick gradient training algorithm for a specific neural network structure called an extra reduced ...
The Nelder--Mead simplex algorithm (J. A. Nelder and R. Mead, Computer Journal, vol 7, pages 308-- ...
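The Nelder--Mead method referenced above is a derivative-free simplex search. A minimal sketch on a 2-D quadratic, using the standard reflection/expansion/contraction coefficients (1, 2, 0.5); the test function and all parameter values are illustrative assumptions, not taken from this abstract:

```python
import numpy as np

# Simplified Nelder--Mead simplex search (Nelder & Mead, 1965).
def nelder_mead(f, x0, iters=200, step=0.5):
    d = len(x0)
    # initial simplex: x0 plus d vertices perturbed along each axis
    simplex = [np.asarray(x0, float)]
    for i in range(d):
        v = simplex[0].copy()
        v[i] += step
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)                       # best vertex first
        centroid = np.mean(simplex[:-1], axis=0)  # centroid excluding worst
        worst = simplex[-1]
        refl = centroid + 1.0 * (centroid - worst)    # reflection
        if f(refl) < f(simplex[0]):
            exp = centroid + 2.0 * (centroid - worst) # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl                        # accept reflection
        else:
            cont = centroid + 0.5 * (worst - centroid)  # contraction
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                                   # shrink toward best
                best = simplex[0]
                simplex = [best] + [best + 0.5 * (v - best)
                                    for v in simplex[1:]]
    return min(simplex, key=f)

# quadratic with known minimum at (1, -2), purely for demonstration
x_min = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

Only function values are used, never gradients, which is why the method is attractive for the non-smooth or noisy objectives that arise in network training.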
In this paper, we study the problem of minimizing a multilinear objective function over the discrete...
We deal with the problem of efficient learning of feedforward neural networks. First, we con...