Abstract

In the Gauss-Markov model, weighted least-squares adjustment generates the BLUUE (Best Linear Uniformly Unbiased Estimator). The usefulness of the unbiasedness requirement is sometimes questioned. Without it, the corresponding principle of minimizing the mean square estimation error among all linear estimators, including the biased ones, leads to the BLE (Best Linear Estimator). Here we present a way to gradually soften the unbiasedness constraint in order to allow a continuous transition from BLUUE to BLE, thereby relying heavily on matrix algebra.
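To fix ideas, a minimal sketch of the two principles contrasted above, written in a standard Gauss-Markov notation that is assumed here rather than taken from the text: let $y = A\xi + e$ with $E\{e\} = 0$, $D\{e\} = \sigma_0^2 P^{-1}$, and $A$ of full column rank. Restricting attention to linear estimators $\hat{\xi} = Ly$, the BLUUE imposes the uniform unbiasedness constraint $LA = I$ and minimizes the dispersion, which reproduces the weighted least-squares solution
$$\hat{\xi}_{\mathrm{BLUUE}} = (A^{T}PA)^{-1}A^{T}Py,$$
whereas the BLE drops that constraint and minimizes the mean square estimation error over all linear estimators,
$$\hat{\xi}_{\mathrm{BLE}} = L^{*}y, \qquad L^{*} = \arg\min_{L}\ \operatorname{tr} E\{(Ly-\xi)(Ly-\xi)^{T}\},$$
thereby admitting biased solutions. The transition described in the abstract interpolates between these two criteria by softening the constraint $LA = I$.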