In the modern digital economy, optimal decision support systems and machine learning systems are becoming an integral part of production processes. Training artificial neural networks, like other engineering problems, gives rise to high-dimensional problems that are difficult to solve with traditional gradient or conjugate gradient methods. Relaxation subgradient minimization methods (RSMMs) construct a descent direction that forms an obtuse angle with every subgradient in a neighborhood of the current minimum, which reduces the search for that direction to solving a system of inequalities. Having formalized the model and taken into account the specific features of the subgradient sets, we reduced the problem of solving a system of inequalities to an a...
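The inequality-system view can be made concrete with a minimal sketch. The function name, the zero starting point, and the perceptron-style relaxation update below are illustrative assumptions, not the paper's actual RSMM rule: the goal is only to show that finding a direction s with <s, g> > 0 for every collected subgradient g (so that -s forms an obtuse angle with all of them and is a descent direction) is a finite system of linear inequalities that simple iterative corrections can solve.

```python
import numpy as np

def find_descent_direction(subgradients, max_epochs=1000):
    """Seek s with <s, g> > 0 for every subgradient g, if one exists.

    Perceptron-style relaxation sketch: whenever an inequality is
    violated, add the violating subgradient to s. For a strictly
    feasible system this terminates after finitely many corrections
    (Novikoff-type bound); -s is then a descent direction that forms
    an obtuse angle with all collected subgradients.
    """
    G = [np.asarray(g, dtype=float) for g in subgradients]
    s = np.zeros_like(G[0])          # assumed starting point
    for _ in range(max_epochs):
        corrected = False
        for g in G:
            if s @ g <= 0.0:         # inequality <s, g> > 0 violated
                s += g               # relaxation step toward feasibility
                corrected = True
        if not corrected:            # all inequalities hold strictly
            return s
    raise RuntimeError("no strictly feasible direction found")

# Example: subgradients of f(x) = |x1| + |x2| sampled near a point
# with x1 > 0; any returned s satisfies <s, g> > 0 for both.
s = find_descent_direction([np.array([1.0, 1.0]), np.array([1.0, -1.0])])
```

In an actual RSMM the subgradient set is revealed sequentially along the descent trajectory rather than given up front, so updates of this kind would run on the fly; the sketch above only illustrates the underlying system-of-inequalities formulation.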