Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
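The core idea (the shape of the p-curve depends on the true effect, so effect size can be recovered from significant p values alone) can be sketched as a toy estimator. This is not the authors' actual estimator, which fits the full distribution of significant p values; the simplified version below matches a single summary statistic, the share of significant p values below .025, using a normal approximation to the t statistic. All parameter values and function names are illustrative assumptions.

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal; approximates the t distribution for large df

def share_below_025(d, n):
    """Expected share of significant (two-sided p < .05) p values that fall
    below .025, for a two-sample test with n per cell and true effect d.
    Approximation: t ~ N(ncp, 1) with ncp = d * sqrt(n / 2); the opposite
    rejection tail is ignored as negligible."""
    ncp = d * (n / 2.0) ** 0.5
    return Z.cdf(ncp - 2.2414) / Z.cdf(ncp - 1.9600)

def estimate_d(observed_share, n):
    """Grid-search the effect size whose implied p-curve best matches the
    observed share of significant p values below .025."""
    grid = [i / 100.0 for i in range(201)]  # candidate d in [0, 2]
    return min(grid, key=lambda d: abs(share_below_025(d, n) - observed_share))

# A uniform p-curve (half the significant p values below .025) implies no
# true effect, matching the paper's logic:
print(estimate_d(0.5, 50))  # -> 0.0
```

A heavier right skew (a larger observed share below .025) maps to a larger estimated effect, which is why the method needs only sample sizes and significant test results.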
This is the final version. Available from the American Economic Association via the DOI in this reco...
Background: The p value obtained from a significance test provides no informatio...
Publication bias threatens the validity of meta-analytic results and leads to overestimation of the ...
Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work...
When studies examine true effects, they generate right-skewed p-curves, distributions of statistical...
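The right-skew claim is straightforward to verify in simulation. The sketch below is illustrative: the effect size (d = 0.5), cell size (n = 50), and use of a normal approximation to the t distribution are my assumptions, not values from the abstract.

```python
import random
from statistics import NormalDist

Z = NormalDist()

def two_sample_p(n, d, rng):
    """Two-sided p value of a two-sample t test (n per cell, true effect d),
    using a normal approximation to the t distribution (df = 2n - 2 = 98 here)."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(d, 1.0) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    t = (mb - ma) / ((va + vb) / n) ** 0.5
    return 2.0 * (1.0 - Z.cdf(abs(t)))

results = {}
for d in (0.0, 0.5):  # null effect vs. a true effect
    rng = random.Random(1)
    sig = [p for p in (two_sample_p(50, d, rng) for _ in range(5000)) if p < 0.05]
    results[d] = (sum(p < 0.025 for p in sig), sum(p >= 0.025 for p in sig))

# d = 0.0: significant p values split roughly evenly around .025 (flat p-curve)
# d = 0.5: heavy right skew -- most significant p values pile up near zero
print(results)
```

The contrast between the flat null p-curve and the right-skewed true-effect p-curve is exactly the diagnostic the abstract describes.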
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analy...
Masicampo and Lalande (2012; M&L) assessed the distribution of 3627 exactly calculated p-values betw...
Abstract: Publication bias hampers the estimation of true effect sizes. Specifically, effect sizes a...
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in t...
Abstract: Methodology described by Francis in “Replication, Statistical Consistency and Publication Bi...
Studies suggest a bias against the publication of null (p > .05) results. Instead of significance...
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in t...
Previously observed negative correlations between sample size and effect size (n-ES correlation) in ...