Cohen's kappa coefficient, κ, is a statistical measure of inter-rater agreement or inter-annotator agreement for qualitative items. In this paper, we focus on interval estimation of κ in the case of two raters and binary items. So far, only asymptotic and bootstrap intervals are available for κ due to its complexity. However, there is no guarantee that such intervals will capture κ with the desired nominal level 1-α. In other words, the statistical inferences based on these intervals are not reliable. We apply the Buehler method to obtain exact confidence intervals based on four widely used asymptotic intervals, three Wald-type confidence intervals and one interval constructed from a profile variance. These exact intervals are compared w...
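As background for the abstract above, the point estimate of κ and one of the Wald-type intervals it mentions can be sketched as follows. This is a minimal illustration using Cohen's (1960) approximate standard error for a 2×2 table; it is not the exact Buehler interval the paper constructs, and the function name and layout are my own.

```python
import math

def cohens_kappa_wald_ci(table, z=1.959963984540054):
    """Cohen's kappa for a 2x2 agreement table with a simple Wald-type CI.

    `table` is [[a, b], [c, d]]: rows index rater 1's rating, columns
    rater 2's.  Uses Cohen's (1960) approximate standard error -- a
    rough sketch, not an exact (Buehler-adjusted) interval.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                      # observed agreement
    p1 = (a + b) / n                      # rater 1's rate for category 1
    p2 = (a + c) / n                      # rater 2's rate for category 1
    pe = p1 * p2 + (1 - p1) * (1 - p2)    # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / n) / (1 - pe)
    return kappa, (kappa - z * se, kappa + z * se)

# Example: 90 concordant and 10 discordant pairs out of n = 100
k, (lo, hi) = cohens_kappa_wald_ci([[45, 5], [5, 45]])
```

For the balanced table above, po = 0.9 and pe = 0.5, so κ = 0.8; the interval is κ ± 1.96 × 0.06. The abstract's point is that intervals of this Wald type need not attain the nominal coverage, which motivates the exact construction.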
Asymptotic and exact conditional approaches have often been used for testing agreement between two r...
This paper takes Brenner & Quan (The Statistician, 39, pp. 391-397) to task for their claim that a B...
The correlation coefficient (CC) is a standard measure of the linear association between two random ...
Cohen's κ (1960) is almost universally used for the assessment of the strength of agreement among...
The command kapci calculates 100(1 - α)% confidence intervals for the kappa statistic using an analy...
Interval estimation for the Bland–Altman limits of agreement with multiple observations per individu...
In 1960, Cohen introduced the kappa coefficient to measure chance-corrected nominal scale a...
This paper presents a critical review of some kappa-type indices proposed in the literature to measu...
One-sided confidence intervals are presented for the average of non-identical ...
Cohen’s κ (kappa) is typically used as a measure of degree of rater agreement. It is often...
Cohen's kappa and its 95% confidence intervals calculated for each RDT between duplicate devices ...
Cohen’s kappa coefficient has become a standard method for measuring the degree of agreement between...
We address the classic problem of interval estimation of a binomial proportion. The Wald interval p^...
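The abstract above concerns the binomial-proportion problem underlying many of these kappa intervals: the Wald interval p̂ ± z·√(p̂(1−p̂)/n) has erratic coverage. A standard alternative it relates to is the Wilson score interval, sketched below; the formula is the textbook one, and the function name is my own.

```python
import math

def wilson_interval(x, n, z=1.959963984540054):
    """Wilson score interval for a binomial proportion x/n.

    A sketch of the standard score-test inversion, often recommended
    over the Wald interval because of the latter's poor coverage.
    """
    phat = x / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Example: 5 successes in 10 trials
lo, hi = wilson_interval(5, 10)
```

Unlike the Wald interval, this never extends outside [0, 1] and behaves sensibly even when x = 0 or x = n, where the Wald interval degenerates to a point.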
Purpose The previous literature on Bland-Altman analysis only describes approximate methods for calc...