Findings from 162 researchers in 73 teams testing the same hypothesis with the same data reveal a universe of unique analytical possibilities leading to a broad range of results and conclusions. Surprisingly, the variance in outcomes mostly cannot be explained by variation in researchers’ modeling decisions or prior beliefs. Each of the 1,261 test models submitted by the teams was ultimately a unique combination of data-analytical steps. Because the noise generated in this crowdsourced research mostly cannot be explained even with myriad meta-analytic methods, we conclude that idiosyncratic researcher variability is a threat to the reliability of scientific findings. This highlights the complexity and ambiguity inherent in the scientific data anal...
How does noise generated by researcher decisions undermine the credibility of science? We test this ...
This study explores how researchers’ analytical choices affect the reliability of scientific finding...