We consider novel methods for the computation of model selection criteria in missing-data problems based on the output of the EM algorithm. The methodology is very general and can be applied to numerous situations involving incomplete data within an EM framework, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Toward this goal, we develop a class of information criteria for missing-data problems, called IC_{H,Q}, which yields the Akaike information criterion and the Bayesian information criterion as special cases. The computation of IC_{H,Q} requires an analytic approximation to a complicated function, called the H-function, along with output from the EM algorithm ...
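The abstract does not spell out the criterion itself; the following sketch, based only on the standard EM decomposition of the observed-data log-likelihood into the Q- and H-functions, indicates the general form such a criterion can take (the exact approximation and penalty used in the cited paper may differ):

\ell_{\mathrm{obs}}(\theta) = Q(\theta \mid \theta') - H(\theta \mid \theta'), \qquad H(\theta \mid \theta') = \mathrm{E}\{\log f(y_{\mathrm{mis}} \mid y_{\mathrm{obs}}, \theta) \mid y_{\mathrm{obs}}, \theta'\},

so that a generic EM-based information criterion evaluated at the maximum likelihood estimate \hat\theta would read

\mathrm{IC}_{H,Q} = -2\,Q(\hat\theta \mid \hat\theta) + 2\,\widehat{H}(\hat\theta \mid \hat\theta) + c_n \dim(\theta),

where Q is a by-product of the final E-step, \widehat{H} is an analytic approximation to the H-function, and c_n = 2 or c_n = \log n gives an AIC-type or BIC-type criterion, respectively.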
Owing to their complex design and use of live subjects as experimental units, missing or incomplete ...
Information criteria such as the Akaike information criterion (AIC) and Bayesian information criteri...
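For reference, in the complete-data setting these criteria take the familiar forms

\mathrm{AIC} = -2\,\ell(\hat\theta) + 2p, \qquad \mathrm{BIC} = -2\,\ell(\hat\theta) + p \log n,

where \ell(\hat\theta) is the maximized log-likelihood, p the number of free parameters, and n the sample size; the work collected here concerns the case in which this observed-data log-likelihood is awkward to evaluate because part of the data is missing.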
Logistic regression is a common classification method in supervised learning. ...
We consider the variable selection problem for a class of statistical models with missing data, incl...
Application of classical model selection methods such as Akaike’s information criterion (AIC) becomes...
We propose an extension of the EM algorithm and its stochastic versions for the construction of inco...
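The truncated abstract above does not include the algorithm itself; purely to illustrate what a stochastic version of the E-step looks like, here is a minimal sketch for a toy bivariate-normal model with values missing in one coordinate (the model, the function name sem_step, and all variable names are hypothetical and not taken from these papers):

import numpy as np

def sem_step(X, miss, mu, Sigma, rng):
    # One stochastic-EM (SEM) iteration for a bivariate normal with
    # missing values in the second coordinate (toy example only).
    Xc = X.copy()
    # S-step: draw each missing x2 from its conditional normal given x1
    # under the current parameters (mu, Sigma).
    beta = Sigma[1, 0] / Sigma[0, 0]
    cond_mean = mu[1] + beta * (X[miss, 0] - mu[0])
    cond_var = Sigma[1, 1] - beta * Sigma[0, 1]
    Xc[miss, 1] = cond_mean + np.sqrt(cond_var) * rng.standard_normal(miss.sum())
    # M-step: maximum likelihood estimates on the completed data.
    mu_new = Xc.mean(axis=0)
    Sigma_new = np.cov(Xc, rowvar=False, bias=True)
    return mu_new, Sigma_new

# Toy usage: simulate correlated data, delete about 30% of x2 at random,
# and iterate; the parameter draws fluctuate around the MLE.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.6], [0.6, 2.0]], size=500)
miss = rng.random(500) < 0.3
X[miss, 1] = np.nan
mu, Sigma = np.array([0.0, 0.0]), np.eye(2)
for _ in range(200):
    mu, Sigma = sem_step(X, miss, mu, Sigma, rng)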
This dissertation is composed of three papers which address the problem of variable selection for mo...
Missingness often occurs in data arising from longitudinal studies, inducing imbalance in the sense ...
We propose a procedure associated with the idea of the E-M algorithm for model selection in the pres...
Model selection criteria in the presence of missing data based on the Kullback-Leibler discrepancy