Crowdsourcing Hypothesis Tests: Making Transparent How Design Choices Shape Research Results
- Publication type:
- Journal article
- Metadata:
- Authors
- Justin F Landy
- Miaolei Liam Jia
- Isabel L Ding
- Domenico Viganola
- Warren Tierney
- Anna Dreber
- Magnus Johannesson
- Thomas Pfeiffer
- Charles R Ebersole
- Quentin F Gronau
- Alexander Ly
- Don van den Bergh
- Maarten Marsman
- Koen Derks
- Eric-Jan Wagenmakers
- Andrew Proctor
- Daniel M Bartels
- Christopher W Bauman
- William J Brady
- Felix Cheung
- Andrei Cimpian
- Simone Dohle
- M Brent Donnellan
- Adam Hahn
- Michael P Hall
- William Jiménez-Leal
- David J Johnson
- Richard E Lucas
- Benoît Monin
- Andres Montealegre
- Elizabeth Mullen
- Jun Pang
- Jennifer Ray
- Diego A Reinero
- Jesse Reynolds
- Walter Sowden
- Daniel Storage
- Runkun Su
- Christina M Tworek
- Jay J Van Bavel
- Daniel Walco
- Julian Wills
- Xiaobing Xu
- Kai Chi Yam
- Xiaoyu Yang
- William A Cunningham
- Martin Schweinsberg
- Molly Urwitz
- Eric L Uhlmann
- Author URL
- https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=fis-test-1&SrcAuth=WosAPI&KeyUT=WOS:000526064600003&DestLinkType=FullRecord&DestApp=WOS_CPL
- DOI
- 10.1037/bul0000220
- eISSN
- 1939-1455
- External identifiers
- Clarivate Analytics Document Solution ID: LD5IZ
- PubMed Identifier: 31944796
- ISSN
- 0033-2909
- Issue
- 5
- Journal
- PSYCHOLOGICAL BULLETIN
- Keywords
- conceptual replications
- crowdsourcing
- forecasting
- research robustness
- scientific transparency
- Pagination
- 451-479
- Publication date
- 2020
- Status
- Published
- Title
- Crowdsourcing Hypothesis Tests: Making Transparent How Design Choices Shape Research Results
- Sub types
- Article
- Volume
- 146
Data source: Web of Science (Lite)
- Other metadata sources:
- Authors
- Justin F Landy
- Miaolei Liam Jia
- Isabel L Ding
- Domenico Viganola
- Warren Tierney
- Anna Dreber
- Magnus Johannesson
- Thomas Pfeiffer
- Charles R Ebersole
- Quentin F Gronau
- Alexander Ly
- Don van den Bergh
- Maarten Marsman
- Koen Derks
- Eric-Jan Wagenmakers
- Andrew Proctor
- Daniel M Bartels
- Christopher W Bauman
- William J Brady
- Felix Cheung
- Andrei Cimpian
- Simone Dohle
- M Brent Donnellan
- Adam Hahn
- Michael P Hall
- William Jiménez-Leal
- David J Johnson
- Richard E Lucas
- Benoît Monin
- Andres Montealegre
- Elizabeth Mullen
- Jun Pang
- Jennifer Ray
- Diego A Reinero
- Jesse Reynolds
- Walter Sowden
- Daniel Storage
- Runkun Su
- Christina M Tworek
- Jay J Van Bavel
- Daniel Walco
- Julian Wills
- Xiaobing Xu
- Kai Chi Yam
- Xiaoyu Yang
- William A Cunningham
- Martin Schweinsberg
- Molly Urwitz
- The Crowdsourcing Hypothesis Tests Collaboration
- Eric L Uhlmann
- DOI
- 10.1037/bul0000220
- eISSN
- 1939-1455
- ISSN
- 0033-2909
- Issue
- 5
- Journal
- Psychological Bulletin
- Language
- en
- Online publication date
- 2020
- Pagination
- 451-479
- Status
- Published online
- Publisher
- American Psychological Association (APA)
- Publisher URL
- http://dx.doi.org/10.1037/bul0000220
- Date of data collection
- 2024
- Title
- Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.
- Volume
- 146
Data source: Crossref
- Abstract
- To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. (PsycInfo Database Record © 2020 APA, all rights reserved).
- Addresses
- Department of Psychology and Neuroscience, Nova Southeastern University.
- DOI
- 10.1037/bul0000220
- eISSN
- 1939-1455
- External identifiers
- PubMed Identifier: 31944796
- Funding acknowledgements
- Jan Wallander and Tom Hedelius Foundation
- Austrian Science Fund (FWF): SFB F63
- Knut and Alice Wallenberg Foundation
- Marsden Fund: 16-UOA-190; 17-MAU-133
- INSEAD
- Swedish Foundation for Humanities and Social Sciences
- Open access
- false
- ISSN
- 0033-2909
- Issue
- 5
- Journal
- Psychological bulletin
- Keywords
- Humans
- Random Allocation
- Psychology
- Research Design
- Adult
- Crowdsourcing
- Language
- eng
- Medium
- Print-Electronic
- Online publication date
- 2020
- Pagination
- 451-479
- Publication date
- 2020
- Status
- Published
- Date of data collection
- 2020
- Title
- Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.
- Sub types
- Research Support, Non-U.S. Gov't
- Journal Article
- Volume
- 146
Data source: Europe PubMed Central
- Author URL
- https://www.ncbi.nlm.nih.gov/pubmed/31944796
- DOI
- 10.1037/bul0000220
- eISSN
- 1939-1455
- Funding acknowledgements
- Austrian Science Fund (FWF)
- Issue
- 5
- Journal
- Psychol Bull
- Keywords
- Adult
- Crowdsourcing
- Humans
- Psychology
- Random Allocation
- Research Design
- Language
- eng
- Country
- United States
- Pagination
- 451-479
- PII
- 2020-02973-001
- Publication date
- 2020
- Status
- Published
- Date the record was made public
- 2020
- Title
- Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.
- Sub types
- Journal Article
- Research Support, Non-U.S. Gov't
- Volume
- 146
Data source: PubMed
- Relations:
- Property of