In this note we introduce some divergence-based model selection criteria. These criteria are defined by estimators of the expected overall discrepancy between the true unknown model and the candidate model, using dual representations of divergences and associated minimum divergence estimators. It is shown that the proposed criteria are asymptotically unbiased. The influence functions of these criteria are also derived, and some comments on robustness are provided.
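The dual representation referred to above can be sketched as follows (a standard variational form of the φ-divergence, written here under the usual convexity and integrability assumptions; the notation is illustrative, not taken from the paper). For a convex function φ with convex conjugate φ*, the φ-divergence between Q and P admits the dual form

```latex
% phi-divergence and its dual (variational) representation
D_{\varphi}(Q, P) \;=\; \int \varphi\!\left(\frac{dQ}{dP}\right) dP
\;=\; \sup_{f \in \mathcal{F}} \left\{ \int f \, dQ \;-\; \int \varphi^{*}(f) \, dP \right\},
\qquad \varphi^{*}(t) = \sup_{x}\,\{\, t x - \varphi(x) \,\}.
```

Replacing Q by the empirical measure turns the supremum into an optimization over a parametric class, which yields the dual minimum divergence estimators on which such criteria are typically built.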
In the mixed modeling framework, Monte Carlo simulation and cross validation are employed to develo...
A model selection criterion is often formulated by constructing an approximately unbiased estimator...
When it is acknowledged that all candidate parameterised statistical models are misspecified relativ...
Abstract: The aim of this work is to develop a new model selection criterion using a general discrep...
In model selection problems, robustness is one important feature for selecting an adequate model fro...
In this paper, we propose a new criterion for selection between nested models. We suppose that the c...
Version 17/04/2008 This article compares traditional Model Selection Criteria with the recently prop...
Although robust divergences, such as the density power divergence and the γ-divergence, are helpful for robust...
In this Master Thesis, we have analytically derived and numerically implemented three estimators of ...
The class of dual [phi]-divergence estimators (introduced in Broniatowski and Keziou (2009) [5]) is ...
The consequences of model misspecification for multinomial data when using minimum [phi]-divergence ...
This paper presents a model selection criterion in a composite likelihood framework based on density...
Recently, Azari et al. (2006) showed that the AIC criterion and its corrected ver...
Abstract: The problems of estimating parameters of statistical models for categorical data, and testin...