Abstract: The process of modifying least squares computations by updating the covariance matrix has been used in control and signal processing for some time in the context of linear sequential filtering. Here we give an alternative derivation of the process and provide extensions to downdating. Our purpose is to develop algorithms that are amenable to implementation on modern multiprocessor architectures. In particular, the inverse Cholesky factor R^{-1} is considered and it is shown that R^{-1} can be updated (downdated) by applying the same sequence of orthogonal (hyperbolic) plane rotations that are used to update (downdate) R. We have attempted to provide some new insights into least squares modification processes and to suggest parallel algorithms...
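The central claim of this abstract — that the same plane rotations which update R also update R^{-1} — can be illustrated with a small NumPy sketch (the sizes, data, and variable names below are illustrative, not from the paper). Appending a new row xᵀ and zeroing it against R with Givens rotations, while applying the identical rotations to R^{-T} stacked over a zero row, produces the updated inverse factor in the same pass:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((20, n))
R = np.linalg.qr(A, mode="r")
# Normalize R to a positive diagonal (sign convention only).
R = np.sign(np.diag(R))[:, None] * R
Rinv_T = np.linalg.inv(R).T

x = rng.standard_normal(n)            # new observation row to incorporate
T = np.vstack([R, x])                 # (n+1) x n: rotations will zero the last row
W = np.vstack([Rinv_T, np.zeros(n)])  # same rotations applied here update R^{-T}

for j in range(n):
    # Givens rotation in the (j, n) plane zeroing T[n, j] against T[j, j].
    r = np.hypot(T[j, j], T[n, j])
    c, s = T[j, j] / r, T[n, j] / r
    for M in (T, W):
        rj, rn = M[j].copy(), M[n].copy()
        M[j] = c * rj + s * rn
        M[n] = -s * rj + c * rn

R_new = T[:n]            # Cholesky factor of A^T A + x x^T
R_new_invT = W[:n]       # its inverse transpose, for free

assert np.allclose(R_new.T @ R_new, A.T @ A + np.outer(x, x))
assert np.allclose(R_new_invT.T, np.linalg.inv(R_new))
```

The identity behind the check: if Q is the product of the rotations, so that Q·[R; xᵀ] = [R̃; 0], then [R̃ᵀ 0]·Q·[R^{-T}; 0ᵀ] = Rᵀ R^{-T} = I, which forces the top block of Q·[R^{-T}; 0ᵀ] to be R̃^{-T}.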
The study of Part 1, concerned with least-squares estimation, identification, and prediction, is extended...
The purpose of this paper is to introduce an adaptive nonlinear digital filtering algorithm...
The rate of convergence and the computational complexity of an adaptive algorithm are two essential ...
Computationally efficient parallel algorithms for downdating the least squares estimator of the ordi...
Abstract: The application of hyperbolic plane rotations to the least squares downdating problem arising...
Abstract: A new inverse factorization technique is presented for solving linear prediction problems arising...
Abstract: A new method for the weighted linear least squares problem min_x ‖M^{1/2}(b − Ax)‖_2 is presented b...
We propose a new class of hyperbolic Gram-Schmidt methods to simultaneously update and downdate the ...
In this paper we study how to update the solution of the linear system Ax = b after the matrix A is ...
Usage of the Sherman-Morrison-Woodbury formula to update linear systems after low rank modifications...
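For context on the entry above: the rank-one (Sherman–Morrison) special case of the Sherman-Morrison-Woodbury formula lets one refresh the solution of Ax = b after the modification A + uvᵀ using only two solves with the original A. A minimal NumPy sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned example matrix
u = rng.standard_normal(n)
v = rng.standard_normal(n)
b = rng.standard_normal(n)

# Sherman-Morrison: (A + u v^T)^{-1} b = y - (v^T y / (1 + v^T z)) z,
# where A y = b and A z = u, so only solves with the unmodified A are needed.
y = np.linalg.solve(A, b)
z = np.linalg.solve(A, u)
x = y - (v @ y / (1.0 + v @ z)) * z

assert np.allclose((A + np.outer(u, v)) @ x, b)
```

In practice y and z would be obtained by reusing an existing factorization of A rather than calling `solve` twice, which is what makes the update cheap.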
Sequential algorithms are developed for the solution of linear system problems concerned with optima...
Abstract: Householder reflections applied from the left are generally used to zero a contiguous sequen...
In this paper, first a brief review is given of a fully pipelined algorithm for recursive least squa...
Caption title. Includes bibliographical references (leaves 16-18). Supported by the NSF, 9300494-DMI; by ...