Channeling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results

Resource type
Journal Article
Author/contributor
Young, Alwyn
Title
Channeling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results
Abstract
I follow R. A. Fisher's The Design of Experiments (1935), using randomization statistical inference to test the null hypothesis of no treatment effects in a comprehensive sample of 53 experimental papers drawn from the journals of the American Economic Association. In the average paper, randomization tests of the significance of individual treatment effects find 13% to 22% fewer significant results than are found using authors' methods. In joint tests of multiple treatment effects appearing together in tables, randomization tests yield 33% to 49% fewer statistically significant results than conventional tests. Bootstrap and jackknife methods support and confirm the randomization results.
Publication
The Quarterly Journal of Economics
Volume
134
Issue
2
Pages
557–598
Date
May 1, 2019
Journal Abbr
The Quarterly Journal of Economics
ISSN
0033-5533
Short Title
Channeling Fisher
Accessed
16/01/2022, 19:06
Library Catalogue
Silverchair
Citation
Young, A. (2019). Channeling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results. The Quarterly Journal of Economics, 134(2), 557–598. https://doi.org/10.1093/qje/qjy029
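Note
The abstract refers to Fisher-style randomization (permutation) inference under the sharp null of no treatment effect. Below is a minimal sketch of that idea for a single binary treatment, using a simple difference in means as the test statistic on simulated data. It is not the paper's actual procedure, which works with regression-based treatment effects and joint tests across tables; the function name, sample sizes, and data here are purely illustrative.

```python
import numpy as np

def randomization_test(y, treat, n_perm=2000, seed=0):
    """Fisher-style randomization test of the sharp null of no treatment effect.

    Re-randomizes the treatment labels and compares the observed
    difference in means against the permutation distribution.
    """
    rng = np.random.default_rng(seed)
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    perm_stats = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(treat)  # re-randomized assignment
        perm_stats[i] = y[shuffled == 1].mean() - y[shuffled == 0].mean()
    # Two-sided p-value: share of re-randomizations at least as extreme
    return np.mean(np.abs(perm_stats) >= np.abs(observed))

# Illustrative data: 100 units, half treated, no true treatment effect
rng = np.random.default_rng(1)
treat = rng.permutation(np.repeat([0, 1], 50))
y = rng.normal(size=100)
print(randomization_test(y, treat))
```

Because the p-value is computed by re-randomizing the treatment assignment itself, its validity rests on the experimental design rather than on asymptotic or distributional assumptions, which is the contrast with conventional tests that the abstract draws.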