In this paper we present a reduced-gradient-type algorithm for solving large-scale linearly constrained minimization problems. During each iteration of the algorithm, linear systems are solved using a preconditioned conjugate-gradient scheme. The preconditioning scheme uses orthogonal transformations, which provide numerical stability. The total storage used by the algorithm can be predicted before the calculations begin. We present numerical experiments that confirm the reliability of the algorithm.
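The inner solver named in this abstract is preconditioned conjugate gradients. As a point of reference, here is a minimal sketch of PCG for a symmetric positive-definite system Ax = b; the Jacobi (diagonal) preconditioner and all names in the example are illustrative assumptions, not the paper's orthogonal-transformation scheme.

```python
import numpy as np

def pcg(A, b, precond, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradients for SPD A.

    precond(r) should approximate A^{-1} r. This is a generic
    sketch, not the orthogonal-transformation preconditioner
    described in the paper.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                  # residual
    z = precond(r)                 # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # update direction
        rz = rz_new
    return x

# Assumed example with a simple Jacobi (diagonal) preconditioner:
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)      # SPD test matrix
b = rng.standard_normal(50)
d = np.diag(A)
x = pcg(A, b, precond=lambda r: r / d)
```

Note that the preconditioner enters only through the `precond(r)` call, which is why schemes as different as diagonal scaling and factorizations built from orthogonal transformations can share this same outer loop.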
We propose an automatic preconditioning scheme for large sparse numerical optimization. The strateg...
The efficient solution of large-scale l...
Our work under this support broadly falls into five categories: automatic differentiation, sparsity,...
This paper deals with background and practical experience with preconditioned gradient methods for s...
We propose a new framework for the application of preconditioned conjugate gradients in the solution...
In this thesis we propose new iteratively constructed preconditioners, to be paired with Conjugate G...
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization A...
Let A ∈ ℝ^(m×n) (with m ≥ n and rank(A) = n) and b ∈ ℝ^m be given. Assume that an approxim...
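The setting above is the classical full-rank least-squares problem min over x of ‖Ax − b‖₂. Since the abstract is truncated, the paper's actual procedure is not reproduced here; the following is only a hedged sketch of the standard QR-based solve for that problem, with made-up data.

```python
import numpy as np

# Hypothetical data for the full-rank least-squares problem
# min_x ||A x - b||_2 with m >= n and rank(A) = n.
rng = np.random.default_rng(1)
m, n = 100, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# QR factorization gives a numerically stable solve:
# A = Q R, so x solves R x = Q^T b.
Q, R = np.linalg.qr(A)                 # reduced QR, R is n x n
x = np.linalg.solve(R, Q.T @ b)

residual = b - A @ x
print(np.linalg.norm(A.T @ residual))  # ~0: normal equations hold
```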
This dissertation considers computational methods for solving linear and nonlinear least squares pro...
A frequently used method for solving linear programming problems is the method of p...
A new iterative method for the solution of large, spars...
This paper presents a conjugate gradient method for solving systems of linear inequalities. ...
The focus of this thesis is to diagonally precondition the limited-memory quasi-Newton method for...
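In limited-memory quasi-Newton methods, a diagonal preconditioner typically enters as the initial Hessian approximation H0 inside the L-BFGS two-loop recursion. The sketch below shows that generic mechanism; the diagonal d and the stored (s, y) history are assumptions for illustration, not the thesis's specific scheme.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, d):
    """L-BFGS two-loop recursion with a diagonal initial
    Hessian approximation H0 = diag(1/d) acting as a
    preconditioner. s_list/y_list hold recent steps s_k and
    gradient differences y_k (oldest first); d > 0 is an
    assumed diagonal. Returns the search direction -H grad.
    """
    q = grad.copy()
    alphas = []
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list),
                         reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    r = q / d                       # apply diagonal H0
    # Second loop: oldest pair to newest.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos),
                              reversed(alphas)):
        beta = rho * (y @ r)
        r += (a - beta) * s
    return -r
```

With an empty history the recursion reduces to a plain diagonally preconditioned gradient step, −grad/d, which is one way to see the diagonal as a preconditioner.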
The computational aspects of the simplex algorithm are investigated, and high performance computing ...
A preconditioned steepest descent (SD) method for solving very large (with dimensions up to 10⁶) un...
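The basic iteration behind such a method replaces the raw gradient step with a step along −M⁻¹∇f(x). A minimal sketch under assumed names (convex quadratic objective, Jacobi preconditioner, fixed step in lieu of a line search) follows; it is not the cited paper's algorithm.

```python
import numpy as np

def preconditioned_sd(grad, x0, M_inv, step, tol=1e-8,
                      max_iter=500):
    """Preconditioned steepest descent: x <- x - step * M^{-1} g.
    grad(x) returns the gradient; M_inv(g) applies the inverse
    of the preconditioner. All names here are illustrative.
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= step * M_inv(g)
    return x

# Assumed example: convex quadratic f(x) = 0.5 x^T A x - b^T x
# with Jacobi preconditioner M = diag(A). A small fixed step
# stands in for the line search a real solver would use.
rng = np.random.default_rng(2)
Q = rng.standard_normal((30, 30))
A = Q @ Q.T + 30 * np.eye(30)
b = rng.standard_normal(30)
d = np.diag(A)
x = preconditioned_sd(lambda x: A @ x - b,
                      np.zeros(30), lambda g: g / d, step=0.3)
```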