<div><p>ABSTRACT A key step in any modeling study is to compare model-produced estimates with observed, reliable data. The original index of agreement (also known as the original Willmott index) has been widely used to measure how well model-produced estimates simulate observed data. In its original version, however, this index may lead the user to select an unsuitable predictive model. Therefore, this study compared the sensitivity of the original index of agreement with that of its two newer versions (modified and refined) and provided an easy-to-use R code capable of calculating all three indices. First, the sensitivity of the indices was evaluated through Monte Carlo experiments. These controlled simulations considered different sorts of errors (s...
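The abstract above does not reproduce the R code it mentions. As a rough sketch only, the three indices it compares can be written from their standard published formulations (original: Willmott, 1981; modified: Willmott et al., 1985; refined d_r: Willmott et al., 2012); the function names below are my own choosing, and this Python version is not the authors' R implementation:

```python
def willmott_original(obs, pred):
    """Original index of agreement d (squared differences); d is in [0, 1]."""
    om = sum(obs) / len(obs)  # mean of the observed series
    denom = sum((abs(p - om) + abs(o - om)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - sum((p - o) ** 2 for o, p in zip(obs, pred)) / denom

def willmott_modified(obs, pred):
    """Modified index d1 (absolute differences), less inflated by large errors."""
    om = sum(obs) / len(obs)
    denom = sum(abs(p - om) + abs(o - om) for o, p in zip(obs, pred))
    return 1.0 - sum(abs(p - o) for o, p in zip(obs, pred)) / denom

def willmott_refined(obs, pred):
    """Refined index d_r, bounded in [-1, 1], with a piecewise definition."""
    om = sum(obs) / len(obs)
    a = sum(abs(p - o) for o, p in zip(obs, pred))  # total absolute error
    b = 2.0 * sum(abs(o - om) for o in obs)         # twice observed deviation
    return 1.0 - a / b if a <= b else b / a - 1.0
```

All three return 1 for a perfect prediction; the denominators assume the observed series is not constant.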
<p>We show the prediction accuracy (that is, the fraction of correct rating predictions) as a functi...
The measurement and reporting of model error is of basic importance when constructing models. Here, ...
The degree of inter-rater agreement is usually assessed through kappa-type coefficie...
Assessing the accuracy of predictive models is critical because predictive models have been increasi...
The quality of subjective evaluations provided by field experts (e.g. physicians or risk assessors) ...
Quantifying how close two datasets are to each other is a common thing to do in scientific research....
Variance-based interrater agreement indices in the rWG family are often interpreted using rules-of-t...
This paper presents a critical review of some kappa-type indices proposed in the literature to measu...
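The kappa-type indices reviewed in the abstract above are not spelled out in this excerpt. As a reminder of the basic computation this family builds on, here is a minimal sketch of Cohen's kappa for two raters (the simplest member of the family; not the specific indices the paper reviews):

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two raters classifying the same items into
    nominal categories: chance-corrected agreement (po - pe) / (1 - pe)."""
    n = len(ratings1)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # chance-expected agreement from each rater's marginal category frequencies
    c1, c2 = Counter(ratings1), Counter(ratings2)
    pe = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)  # undefined when pe == 1 (degenerate marginals)
```

Perfect agreement yields kappa = 1, while agreement at exactly the chance-expected rate yields kappa = 0.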
The evaluation of patterns in the residuals of model estimates vs. other variables can be useful in ...
<p>Pearson correlation coefficient (<i>r</i>) provides good information about the closeness of the r...
Agreement among raters is an important issue in medicine, as well as in education and psychology. Th...