Abstract — Regularization is a useful way to extend learning models so that they classify effectively. Given the success of regularized perceptron-based (one-layer neural network) methods, we introduce a similar kind of regularization for two global-optimum approaches recently proposed by Castillo et al., which combine the flexibility of nonlinear transfer functions with the computational efficiency needed to solve complex problems. We focus on the two approaches that use sigmoid transfer functions. The first, linear approach solves a set of linear equations, while the second, min-max approach reduces to a linear programming problem. We introduce regularization in such a way that the first, linear approach remains linear…
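The linear approach described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes the one-layer sigmoid network is linearized by applying the inverse sigmoid (logit) to the targets, after which adding L2 regularization keeps the training problem a linear system (ridge-style normal equations). The function names and the regularization parameter `lam` are illustrative.

```python
import numpy as np

def train_regularized_linear(X, y, lam=0.1, eps=1e-6):
    """Sketch of the regularized linear approach (assumed form):
    map targets through the inverse sigmoid so the one-layer
    network is linear in its weights, then solve the
    L2-regularized normal equations -- still a linear system."""
    y = np.clip(y, eps, 1 - eps)                   # keep the logit finite
    z = np.log(y / (1 - y))                        # inverse of the sigmoid transfer function
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])      # ridge term keeps the system well-posed
    w = np.linalg.solve(A, Xb.T @ z)               # one linear solve, no iterative training
    return w

def predict(X, w):
    """Forward pass of the one-layer sigmoid network."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-(Xb @ w)))
```

The point of the sketch is structural: because the penalty is quadratic in the weights, regularization changes only the matrix of the linear system, so the approach stays a single linear solve. An analogous move for the min-max approach would use an L1-style penalty, which preserves the linear programming structure.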