
@renebekkers.bsky.social

17 Followers  |  31 Following  |  10 Posts  |  Joined: 20.02.2025

Latest posts by renebekkers.bsky.social on Bluesky

LnuOpen | Meta-Psychology

My article "Data is not available upon request" was published in Meta-Psychology. Very happy to see this out!
open.lnu.se/index.php/me...

04.10.2025 12:54 — 👍 96    🔁 34    💬 4    📌 8
The threat of analytic flexibility in using large language models to simulate human data: A call to attention Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...

Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵

18.09.2025 07:56 — 👍 326    🔁 149    💬 12    📌 58
A Modular Approach to Research Quality A dashboard of transparency indicators signaling trustworthiness Our Research Transparency Check (Bekkers et al., 2025) rests on two pillars. The first pillar is the development of Papercheck (DeBruine & Lakens, 2025), a collection of software applications that assess the transparency and methodological quality of research that we blogged about earlier (Lakens, 2025). Our approach is modular: for each defined aspect of transparency and methodological quality we develop a dedicated module and integrate it in the…

How A Research Transparency Check Facilitates Responsible Assessment of Research Quality

13.09.2025 07:57 — 👍 0    🔁 0    💬 0    📌 0
The social sciences face a replicability crisis. A key determinant of replication success is statistical power. We assess the power of political science research by collating over 16,000 hypothesis tests from about 2,000 articles in 46 areas of the discipline. Under generous assumptions, we show that quantitative research in political science is greatly underpowered: the median analysis has about 10% power, and only about 1 in 10 tests have at least 80% power to detect the consensus effects reported in the literature. We also find substantial heterogeneity in tests across research areas, with some being characterized by high power but most having very low power. To contextualize our findings, we survey political methodologists to assess their expectations about power levels. Most methodologists greatly overestimate the statistical power of political science research.

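The 10% and 80% power figures in the abstract can be made concrete with a textbook normal-approximation power calculation for a two-sample test. This is a generic sketch, not the paper's actual estimation procedure, and the effect sizes and sample sizes below are chosen purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d (Cohen's d) with n observations per group."""
    z_crit = N.inv_cdf(1 - alpha / 2)   # critical value, e.g. ~1.96 for alpha = 0.05
    ncp = d * sqrt(n / 2)               # noncentrality under the alternative
    # probability of landing beyond either critical value
    return (1 - N.cdf(z_crit - ncp)) + N.cdf(-z_crit - ncp)

# A "small" effect (d = 0.2) with 50 observations per group is badly underpowered:
print(round(power_two_sample(0.2, 50), 2))   # ≈ 0.17

# The classic rule of thumb: d = 0.5 needs about 64 per group for 80% power:
print(round(power_two_sample(0.5, 64), 2))   # ≈ 0.81
```

Running numbers like these makes the abstract's point tangible: with small effects and typical sample sizes, most tests sit far below the conventional 80% benchmark.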

The pretty draft is now online.

Link to paper (free): www.journals.uchicago.edu/doi/epdf/10....

Our replication package starts from the raw data and we put real work into making it readable & setting it up so people could poke at it, so please do explore it: dataverse.harvard.edu/dataset.xhtm...

10.09.2025 17:25 — 👍 106    🔁 29    💬 2    📌 6
Post image

At the FORRT Replication Hub, our mission is to support researchers who want to replicate previous findings. We have now published a big new resource to fulfill this mission: an open access handbook for reproduction and replication studies: forrt.org/replication_...

03.09.2025 06:54 — 👍 59    🔁 35    💬 2    📌 2
Post image

We're bringing together members of the VU community, including research support staff and faculty, who will share their experiences and lessons learned from implementing a campus-wide Open Science program.

๐‘๐ž๐ ๐ข๐ฌ๐ญ๐ž๐ซ: cos-io.zoom.us/webin...
๐ƒ๐š๐ญ๐ž: Sep 10, 2025
๐“๐ข๐ฆ๐ž: 9:00 AM ET

26.08.2025 18:31 — 👍 6    🔁 4    💬 0    📌 0
PMGS Meta Research Symposium 2025, 16-17 October 2025, TU/e Eindhoven. Conference website: https://paulmeehlschool.github.io/workshops/ Program Day 1 - Pre-Symposium Mini-Workshop Time Activity…

The full program for the PMGS Meta Research Symposium 2025 is online: docs.google.com/document/d/1... If you are interested in causal inference, systematic review, hypothesis testing, and preregistration, join us October 17th in Eindhoven! Attendance is free!

20.08.2025 14:34 — 👍 21    🔁 13    💬 0    📌 3
Post image

Coding errors in data processing are more likely to be ignored if the erroneous result is in line with what we want to see. The theoretical prediction made in this paper is very plausible - testing it empirically is perhaps a bit more challenging. But still, interesting. arxiv.org/pdf/2508.20069

30.08.2025 09:54 — 👍 5    🔁 2    💬 0    📌 0
Post image

Thanks for mentioning it! Indeed it's a teaching tool. Each entry is a risk factor, but some are worse than others and their weights will vary from case to case. The website gives some meta-science insights for each issue and recommends a fix.

16.08.2025 13:33 — 👍 1    🔁 0    💬 0    📌 0
A 25-year-old successful replication While reviewing a paper, I suddenly remembered the first replication of an experimental study I ever conducted. It's 25 years old. In March 2000, I taught a workshop for a sociology class of 25 undergraduate students at Utrecht University. I asked the students in my group to fill out a simplified version of Study 4 in the Miller & Ratner (1998) JPSP paper on the norm of self-interest.

A 25-year-old successful replication of a well-known finding in social psychology about the overestimation of self-interest in attitudes.

15.08.2025 11:39 — 👍 0    🔁 0    💬 0    📌 0

A good systematic review/meta-analysis will evaluate the quality of each study and not just review the conclusions/data.

If the methodology doesn't include evaluation of quality, it's not a good source. Reviewing methodology is part of the vetting process.

24.07.2025 11:14 — 👍 3    🔁 1    💬 0    📌 0
Job opportunities at Retraction Watch Here are our current open positions: Editor, Medical Evidence Project Staff reporter, Retraction Watch Assistant researcher, Retraction Watch Database Learn more about the Center for Scientific Int…

The Center For Scientific Integrity, our parent nonprofit, is hiring! Two new positions:

-- Editor, Medical Evidence Project
-- Staff reporter, Retraction Watch

and we're still recruiting for:

-- Assistant researcher, Retraction Watch Database

29.07.2025 16:11 — 👍 27    🔁 26    💬 1    📌 3

#ai #llm #integrity #reliability #transparency

02.07.2025 06:00 — 👍 0    🔁 0    💬 0    📌 0
The High Five: A Checklist for the Evaluation of Knowledge Claims As the wave of LLM-generated research swells, how can you tell whether it is legit? A fast-growing proportion of science contains results generated by Large Language Models (LLMs) and other forms of generative AI. Researchers rely on virtual assistants to do their literature reviews, summarize previous research, and write texts for journal articles (Kwon, 2025). As a result, AI-generated research swells to immense proportions.

As the wave of LLM-generated research swells, how can you tell whether it is legit? Introducing the High Five: a checklist for the evaluation of knowledge claims by LLMs, other generative AI, and science in general.

01.07.2025 13:49 — 👍 0    🔁 0    💬 0    📌 1
Do business and economics studies erode prosocial values? Does exposure to business and economics education make students less prosocial and more selfish? Employing a difference-in-difference strategy with panel-data from three subsequent cohorts of student...

Very important evidence from Sweden showing that #business and #economics students become less #prosocial during their studies, while #law students do not: onlinelibrary.wiley.com/doi/full/10....

23.02.2025 15:12 — 👍 0    🔁 0    💬 0    📌 0

Yes!

20.02.2025 09:13 — 👍 0    🔁 0    💬 0    📌 0
Post image

We are looking for RAs to work with us on a new meta-science & replication project, joint with @i4replication.bsky.social, now with a focus on environmental (economics) topics such as air pollution & carbon pricing. RAs can work remotely but must be located in Germany: www.rwi-essen.de/fileadmin/us...

30.01.2025 10:54 — 👍 10    🔁 9    💬 0    📌 0

@ap.brid.gy @renebekkers.mastodon.social.ap.brid.gy

20.02.2025 07:39 — 👍 0    🔁 0    💬 0    📌 0

@ap.brid.gy @renebekkers@mastodon.social

20.02.2025 07:37 — 👍 0    🔁 0    💬 0    📌 0

@ap.brid.gy

20.02.2025 07:36 — 👍 0    🔁 0    💬 0    📌 0
