Nonstandard Errors

Albert J. Menkveld*, Anna Dreber, Felix Holzmeister, Juergen Huber, Magnus Johannesson, Michael Kirchler, Sebastian Neusüß, Michael Razen, Utz Weitzel, David Abad-Díaz, Menachem Abudy, Tobias Adrian, Yacine Ait-Sahalia, Olivier Akmansoy, Jamie T. Alcock, Vitali Alexeev, Arash Aloosh, Livia Amato, Diego Amaya, James J. Angel, Jian Chen, Teodor Duevski, Petri Jylhä, Markku Kaustia, Yijie Li, Matthijs Lof, Kalle Rinne, Paul Rintamäki, Hai Tran, Wenjia Yu, Xiaoyu Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

17 Citations (Scopus)
33 Downloads (Pure)

Abstract

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty—nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.

Original language: English
Pages (from-to): 2339-2390
Number of pages: 52
Journal: Journal of Finance
Volume: 79
Issue number: 3
DOIs
Publication status: Published - Jun 2024
MoE publication type: A1 Journal article-refereed
