With two judges and a two-point rating scale, the test statistic for Kappa is the same as Pearson's chi-square statistic applied to the 2 × 2 table of paired observations. This equivalence allows a quick test of the null hypothesis of no agreement, as Pearson's chi-square statistic is much less cumbersome to compute than the Kappa statistic and its variance. A simple formula for the null-hypothesis variance is also derived.
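The equivalence can be checked numerically. The sketch below (not the paper's own code; the simplified 2 × 2 null-variance formula used here is an assumption derived from Fleiss's general expression) computes both statistics from the cell counts of a 2 × 2 table of paired ratings and shows they coincide:

```python
# Numerical check that, for a 2x2 table of paired binary ratings,
# kappa's null-hypothesis z-statistic squared equals Pearson's
# chi-square statistic (computed without continuity correction).

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def kappa_z_squared(a, b, c, d):
    """(Cohen's kappa / its null-hypothesis standard error) squared."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    # Null-hypothesis variance of kappa, simplified for the 2x2 case
    # (an assumption of this sketch, reduced from Fleiss's formula):
    s = (a + b) * (b + d) + (a + c) * (c + d)
    var0 = 4 * (a + b) * (c + d) * (a + c) * (b + d) / (n * s**2)
    return kappa**2 / var0

# Example table: rows are judge 1's ratings, columns are judge 2's.
a, b, c, d = 40, 10, 5, 45
print(chi_square_2x2(a, b, c, d))   # ~49.49
print(kappa_z_squared(a, b, c, d))  # same value
```

With these counts both functions return approximately 49.49, so the quick chi-square test gives the same result as the full kappa-based test of no agreement.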
Replies to the comments made by Kenneth J. Zucker (see record 2014-12202-001) on the authors' origin...
This paper presents two Bayesian alternatives to the chi-squared test for determining whether a pa...
Editor: Cohen’s kappa is commonly used as a measure of chance-adjusted agreement. Warnings have been...
The chi-square test was compared with the Fisher exact test using Ns ranging from 3 to 69. Contrar...
The kappa statistic is commonly used for quantifying inter-rater agreement on a nominal scale. In th...
In medicine, before replacing an old device by a new one, we need to know whether the results of the...
Keywords: Goodness-of-fit test, Chi-square test, Overlapping m-tuple test, Serial test. One shortcom...
International audienceBy an appropriate hypothesis testing in a simulated example, one can show that...
The kappa statistic is frequently used to test interrater reliability. The importance of rater relia...
Chi-square test and the logic of hypothesis testing were developed by Karl Pearson. In this article ...
Abstract. Cohen’s kappa is presently a standard tool for the analysis of agreement in a 2×2 reliabil...
In large sample studies where distributions may be skewed and not readily transformed to symmetry, ...
Measuring agreement between qualified experts is commonly used to determine the effectiveness of a ...
Karl Pearson's seminal article on the criterion is reviewed, formalized in modern notation and its m...
The Chi-Square test (χ2 test) is a family of tests based on a series of assumptions and is frequentl...