In this paper one of the main open questions in the area of subspace methods is partly answered. One particular algorithm, sometimes termed CCA, is shown to be asymptotically equivalent to estimates obtained by maximizing the pseudo likelihood. Here, asymptotically equivalent means that the difference of the two estimators, multiplied by the square root of the sample size, tends to zero.
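As a minimal formalization of the equivalence notion stated above (the symbols $\hat{\theta}_T^{\mathrm{CCA}}$, $\hat{\theta}_T^{\mathrm{PML}}$, and the sample size $T$ are introduced here purely for illustration and do not appear in the original text), the claim can be written as

$$\sqrt{T}\,\bigl(\hat{\theta}_T^{\mathrm{CCA}} - \hat{\theta}_T^{\mathrm{PML}}\bigr) \;\longrightarrow\; 0 \quad \text{in probability as } T \to \infty,$$

which implies in particular that the two estimators share the same asymptotic distribution and hence the same asymptotic variance.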