This paper focuses on the consequences of assuming a wrong model for multinomial data when using minimum penalized φ-divergence, also known as minimum penalized disparity, estimators to estimate the model parameters. These estimators are shown to converge to a well-defined limit. As an application of these results, a parametric bootstrap is shown to consistently estimate the null distribution of a certain class of test statistics for detecting model misspecification. An illustrative application to assessing the thematic accuracy of a global land cover map is included.
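The setup above — minimum φ-divergence estimation for multinomial cell probabilities, followed by a parametric bootstrap of the null distribution of a divergence-based test statistic — can be sketched as follows. This is a minimal illustration only: it uses the Cressie–Read power divergence (λ = 2/3) and a toy binomial(2, θ) cell model, the counts and all names are assumptions, and the penalization for sparse cells studied in the paper is omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def power_divergence(p_hat, p_model, lam=2.0 / 3.0):
    """Cressie-Read power divergence between empirical and model cell probabilities."""
    return np.sum(p_hat * ((p_hat / p_model) ** lam - 1.0)) / (lam * (lam + 1.0))

def model_probs(theta):
    # Toy parametric cell model: binomial(2, theta) probabilities (an assumption,
    # standing in for whatever parametric family is actually fitted).
    return np.array([(1.0 - theta) ** 2, 2.0 * theta * (1.0 - theta), theta ** 2])

def fit(counts):
    # Minimum power-divergence estimate of theta from observed cell counts.
    p_hat = counts / counts.sum()
    res = minimize_scalar(lambda t: power_divergence(p_hat, model_probs(t)),
                          bounds=(1e-6, 1.0 - 1e-6), method="bounded")
    return res.x, res.fun

counts = np.array([30, 50, 20])        # hypothetical multinomial counts
n = counts.sum()
theta_hat, div_min = fit(counts)
T_obs = 2.0 * n * div_min              # divergence-based goodness-of-fit statistic

# Parametric bootstrap of the null distribution: resample from the *fitted*
# model, refit on each resample, and recompute the statistic.
rng = np.random.default_rng(0)
B = 200
T_boot = np.empty(B)
for b in range(B):
    c_star = rng.multinomial(n, model_probs(theta_hat))
    _, div_star = fit(c_star.astype(float))
    T_boot[b] = 2.0 * n * div_star
p_value = np.mean(T_boot >= T_obs)
print(theta_hat, T_obs, p_value)
```

The bootstrap p-value is the fraction of resampled statistics at least as large as the observed one; the paper's consistency result is what justifies using the fitted model, rather than the unknown true distribution, as the resampling distribution.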
When identifying a model by a penalized minimum contrast procedure, we give a description of the ove...
Score-based divergences have been widely used in machine learning and statistics applications. Desp...
A multiplier bootstrap procedure for construction of likelihood-based confidence sets is considered ...
The consequences of model misspecification for multinomial data when using minimum φ-divergence ...
In this paper we present a simulation study to analyze the behavior of the $\phi$-divergenc...
This paper investigates a new family of statistics based on Burbea–Rao divergence for testin...
The main purpose of this paper is to introduce and study the behavior of minimum (Formula presented....
In the present work, the problem of estimating parameters of statistical models for categori...
In this paper we consider categorical data that are distributed according to a multinomial, ...
In this paper, we study a bias reduced kernel density estimator and derive a nonparametric φ-diverge...
Since its introduction, the joint maximum likelihood (JML) has been widely used as an estimation met...
In this paper we consider an exploratory canonical analysis approach for multinomial populat...
It will be shown that the power-divergence family of goodness-of-fit statistics for completely speci...
We study a Bayesian model where we have made specific requests about the parameter values to be esti...