Abstract: The increasing use of encoded medical data requires flexible tools for data quality assessment. Existing methods are not always adequate, so this paper proposes a new metric for inter-rater agreement of aggregated diagnostic data. The metric, which is applicable in prospective as well as retrospective coding studies, quantifies the variability in the coding scheme, and the variation can be broken down by category and by coder. Five alternative definitions were compared in a set of simulated coding situations and in the context of mortality statistics. Two of them were more effective, and the choice between them depends on the situation. The metric is more powerful for larger numbers of coded cases, and Type I e...
In a situation where two raters are classifying a series of observations, it is useful to have an in...
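A standard index for this two-rater setting is Cohen's kappa, which corrects the observed proportion of agreement for the agreement expected by chance from the raters' marginal category frequencies. The following is a minimal sketch, not code from any of the papers above; the function name and the toy diagnosis codes are illustrative only.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labelling the same items (nominal data)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two coders assigning one of three diagnosis codes to ten cases (toy data).
a = ["I10", "I10", "E11", "J45", "I10", "E11", "J45", "J45", "I10", "E11"]
b = ["I10", "E11", "E11", "J45", "I10", "E11", "I10", "J45", "I10", "E11"]
print(round(cohens_kappa(a, b), 3))  # 0.697
```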
We consider the problem of assessing inter-rater agreement when there are missing data and a large n...
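For designs with many raters and missing ratings, Krippendorff's alpha is a common choice, since it is computed from a coincidence matrix over whichever rater pairs happen to exist for each unit. Below is a sketch of the nominal-data version under our own naming conventions; units rated by fewer than two coders are skipped, as the standard definition requires.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data with missing ratings.

    `units` maps each unit (item) to the list of values assigned by the
    raters who coded it; raters may skip units, and units with fewer than
    two ratings are ignored.
    """
    coincidence = Counter()
    for values in units.values():
        m = len(values)
        if m < 2:
            continue
        # Each ordered pair of ratings within a unit contributes 1/(m - 1).
        for c, k in permutations(values, 2):
            coincidence[(c, k)] += 1 / (m - 1)
    n_c = Counter()
    for (c, _), w in coincidence.items():
        n_c[c] += w
    n = sum(n_c.values())
    # Observed vs chance-expected disagreement (off-diagonal mass).
    d_o = sum(w for (c, k), w in coincidence.items() if c != k)
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1 - d_o / d_e

ratings = {  # four units, varying numbers of raters (toy data)
    "u1": ["A", "A", "A"], "u2": ["A", "B"],
    "u3": ["B", "B", "B", "B"], "u4": ["A", "B", "B"],
}
print(round(krippendorff_alpha_nominal(ratings), 3))
```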
Objective: For assessing interrater agreement, the concepts of observed agreement and specific agreem...
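For binary ratings, observed and specific agreement follow directly from a 2x2 cross-classification of the two raters' judgements. The snippet below sketches the standard formulas (positive and negative specific agreement); the variable names and counts are ours.

```python
def agreement_2x2(a, b, c, d):
    """Observed and specific agreement for two raters' binary ratings.

    a = both raters positive, d = both raters negative,
    b and c = the two discordant cells of the 2x2 table.
    """
    n = a + b + c + d
    observed = (a + d) / n                 # overall observed agreement
    positive = 2 * a / (2 * a + b + c)     # specific agreement on positives
    negative = 2 * d / (2 * d + b + c)     # specific agreement on negatives
    return observed, positive, negative

# e.g. 40 concordant positives, 45 concordant negatives, 15 discordant cases
print(agreement_2x2(a=40, b=10, c=5, d=45))
```

Unlike kappa, specific agreement reports agreement on positives and negatives separately, which is why it is often recommended when category prevalence is skewed.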
Objective: To investigate the impact of coding variations on 'hospital standardized mortality ratio'...
Objective: To investigate whether different measures of inter-rater reliability will compute similar ...
Abstract: Agreement measures are used frequently in reliability studies that involve categorical data....
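When the categories are ordered rather than purely nominal, a weighted kappa is typically used so that near-misses count less than distant disagreements. Here is a minimal sketch with quadratic weights; the function name and the toy severity-scale counts are assumptions of ours, not taken from the study above.

```python
import numpy as np

def weighted_kappa(confusion, weights="quadratic"):
    """Weighted kappa from a k x k confusion matrix of two raters' ratings.

    Quadratic weights penalise large ordinal disagreements more heavily
    than adjacent-category ones; "linear" uses absolute distance instead.
    """
    m = np.asarray(confusion, dtype=float)
    k = m.shape[0]
    i, j = np.indices((k, k))
    power = 2 if weights == "quadratic" else 1
    w = (np.abs(i - j) / (k - 1)) ** power      # disagreement weights
    p = m / m.sum()                             # observed joint proportions
    e = np.outer(p.sum(axis=1), p.sum(axis=0))  # chance-expected proportions
    return 1 - (w * p).sum() / (w * e).sum()

# Two raters grading 100 cases on a 3-point severity scale (toy counts).
print(round(weighted_kappa([[30, 5, 0], [4, 40, 6], [1, 4, 10]]), 3))
```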
Background: Many methods under the umbrella of inter-rater agreement (IRA) have been proposed to eva...
Medical encoding support systems for diagnoses and medical procedures are an e...
Administrative data is increasingly used for the production of official statistics. However, adminis...
Inter-coder reliability is the most often used quantitative indicator of measurement quality in cont...
When an outcome is rated by several raters, ensuring consistency across raters increases the reliabi...
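With more than two raters, the two-rater kappa generalises to Fleiss' kappa, which averages pairwise agreement within each item and corrects it by the overall category proportions. The sketch below assumes the same number of raters per item; the example counts are invented for illustration.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an N x k matrix: counts[i, j] = number of raters
    assigning item i to category j (same number of raters per item)."""
    counts = np.asarray(counts, dtype=float)
    n = counts[0].sum()                     # raters per item
    # Per-item agreement: proportion of rater pairs agreeing on the item.
    p_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()
    # Chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / counts.sum()
    p_e = (p_j ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

# Six raters sorting each of four cases into one of three categories (toy).
print(round(fleiss_kappa([[6, 0, 0], [3, 3, 0], [1, 4, 1], [0, 0, 6]]), 3))
```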
Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is ...
Agreement measures are useful tools to both compare different evaluations of the same diagnostic ...