Also, this should not be a reason to stop exercising.
1) There are other benefits of exercise
2) Some populations/exercises show benefit
3) There might be wider effects on cognition; however, the literature is too heterogeneous and contaminated with publication bias to be certain
01.12.2025 16:19 — 👍 13 🔁 0 💬 0 📌 0
I think that the field needs to clean up the published literature a bit. Additional small studies are not going to move the needle at this point; maybe a couple of large-scale, pre-registered studies might provide more insight?
01.12.2025 16:19 — 👍 10 🔁 2 💬 1 📌 0
We also re-analyzed all of the original meta-analyses individually. Many are consistent with publication bias: both the evidence for the pooled effects and their magnitude decrease once publication bias is adjusted for.
01.12.2025 16:19 — 👍 2 🔁 0 💬 1 📌 0
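The mechanism behind that pattern can be illustrated with a small, self-contained simulation (purely hypothetical numbers, not data from the preprint): if only statistically significant studies get published, naively pooling the published effects overestimates the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: simulate primary studies with a small true
# effect, then "publish" only the statistically significant ones and
# pool the published effects naively.
true_d = 0.1          # true standardized mean difference (assumed)
n_per_group = 30
n_studies = 2000

se = np.sqrt(2 / n_per_group)              # approximate SE of Cohen's d
d_hat = rng.normal(true_d, se, n_studies)  # observed study effects
z = d_hat / se
published = np.abs(z) > 1.96               # significance filter

naive_pooled = d_hat[published].mean()     # pooled over published only
all_pooled = d_hat.mean()                  # pooled over everything

print(f"true effect:           {true_d:.2f}")
print(f"pooled over all:       {all_pooled:.2f}")   # close to the truth
print(f"pooled over published: {naive_pooled:.2f}") # inflated well above it
```

Adjusting for publication bias tries to undo exactly this kind of inflation, which is why adjusted pooled effects shrink.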
First, we found notable publication bias, especially in studies on general cognition and executive function. Importantly, there was extreme between-study heterogeneity (tau ~ 0.3-0.6!). This means the results were consistent with both large benefit and large harm.
01.12.2025 16:19 — 👍 3 🔁 0 💬 1 📌 0
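To see why tau values that large leave both benefit and harm on the table, here is a quick back-of-the-envelope check (the pooled effect of d = 0.2 is an assumed illustrative value, not a figure from the preprint): an approximate 95% prediction interval for the true study effects is mu ± 1.96 × tau.

```python
# Illustrative numbers: even with a positive pooled effect, large
# between-study heterogeneity (tau) makes the approximate 95% prediction
# interval for true study effects span both benefit and harm.
mu = 0.2   # assumed pooled effect (Cohen's d), for illustration

for tau in (0.3, 0.6):
    lo = mu - 1.96 * tau   # ignoring uncertainty in mu for simplicity
    hi = mu + 1.96 * tau
    print(f"tau = {tau}: 95% prediction interval ({lo:.2f}, {hi:.2f})")
# tau = 0.3: 95% prediction interval (-0.39, 0.79)
# tau = 0.6: 95% prediction interval (-0.98, 1.38)
```

Both intervals cross zero by a wide margin, so individual settings could plausibly show large positive or large negative effects.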
We were not the only ones to notice; see also @matthewbjane.bsky.social commenting on this when the study came out:
x.com/MatthewBJane...
So, we manually extracted the study-level data from the included meta-analyses and re-evaluated the evidence.
01.12.2025 16:19 — 👍 1 🔁 0 💬 1 📌 0
We just preprinted a huge meta-meta-analysis examining the effects of exercise on cognition, memory, and executive function
In short
- 2239 effect sizes
- extreme between-study heterogeneity
- extensive publication bias
- some subgroup/exercise-specific effects
More below (doi.org/10.31234/osf...)
01.12.2025 16:19 — 👍 61 🔁 30 💬 1 📌 0
We built the openESM database:
▶️60 openly available experience sampling datasets (16K+ participants, 740K+ obs.) in one place
▶️Harmonized (meta-)data, fully open-source software
▶️Filter & search all data, simply download via R/Python
Find out more:
🌐 openesmdata.org
📝 doi.org/10.31234/osf...
22.10.2025 19:34 — 👍 275 🔁 143 💬 14 📌 13
We developed the PublicationBiasBenchmark R package (github.com/FBartos/Publ...), which can be easily extended with new methods and measures. It also automatically generates a webpage with summary reports (fbartos.github.io/PublicationB...). All the raw data, results, and measures are available on OSF.
23.10.2025 16:03 — 👍 2 🔁 0 💬 0 📌 0
Our proposal addresses other issues with current simulation studies (incomparability, irreproducibility, ...).
We demonstrate the living synthetic benchmark methodology on the publication bias adjustment literature. See how previous simulations use different methods and measures.
23.10.2025 16:03 — 👍 2 🔁 0 💬 1 📌 0
To start the process, we suggest
- collecting all published methods and simulations
- evaluating all methods on all simulations
- publishing this set of results as the initial synthetic benchmark
- letting later research update this benchmark with new methods and simulations
23.10.2025 16:03 — 👍 1 🔁 0 💬 1 📌 0
We want to separate those two steps.
New simulations should be published without new methods. Instead, they should evaluate all existing methods.
New methods should be published without new simulations. Instead, they should be assessed on all existing simulations.
23.10.2025 16:03 — 👍 2 🔁 0 💬 1 📌 0
Simulation studies have a conflict of interest problem. The same team:
- develops a new method
- designs a simulation study to evaluate it
However, the new method has to show good performance to get published.
We propose living synthetic benchmarks to address the issue (doi.org/10.48550/arX...).
23.10.2025 16:03 — 👍 19 🔁 10 💬 2 📌 0
We are pleased to have
@fbartos.bsky.social
join us today, Tuesday, September 30th, 11am (EST) to talk about Bayesian hypothesis testing! This is followed by a workshop on using JASP for statistics around 12:10pm. The zoom is open to the public with details in the flyer!
@PsychPrinceton
30.09.2025 11:43 — 👍 2 🔁 1 💬 0 📌 0
Simonsohn has now posted a blog response to our recent paper about the poor statistical properties of the P curve. @clintin.bsky.social and I are finishing up a less-technical paper that will serve as a response. But I wanted to address a meta-issue *around* this that may clarify some things. 1/x
25.09.2025 10:07 — 👍 76 🔁 30 💬 2 📌 8
> Why are you actively misrepresenting what others are saying all the time?
I'm happy to discuss with you in person if we meet anywhere, but I don't find replying to you online very productive at this point.
24.09.2025 13:37 — 👍 0 🔁 0 💬 1 📌 0
> Carter et al are right, and you are wrong
That's pretty much just arguing from authority
24.09.2025 13:35 — 👍 1 🔁 0 💬 1 📌 0
I did not say meta-analyses with huge heterogeneity lol. I said under any heterogeneity. Would you consider tau = 0.1-0.2 on Cohen's d scale with an average effect size of 0.2-0.4 huge? I would not. Pretty meaningful result (and probably representative of many meta-analyses), but p-curve fails.
24.09.2025 13:33 — 👍 0 🔁 0 💬 1 📌 0
> P-curve does what worse than random effects?
All the simulations I linked show that p-curve estimates the effect size worse, on average, than random effects.
24.09.2025 13:31 — 👍 0 🔁 0 💬 1 📌 0
Must've been a bug on the platform -- I could not see any responses I sent to the thread but other features worked fine.
24.09.2025 13:29 — 👍 0 🔁 0 💬 1 📌 0
For some reason, I cannot reply to Lakens anymore?
Regardless, if anyone is interested in the topic:
- Carter does not say something completely opposite to my claims
- I^2 is not a measure of absolute heterogeneity; Lakens's argument strawmans meta-analysis
- p-curve does worse than random effects
24.09.2025 11:16 — 👍 0 🔁 0 💬 6 📌 0
It's not completely opposed - they say these methods work well only under no heterogeneity. From their and other simulation studies, it seems that a simple random-effects model performs better than p-curve even when publication bias is present. As such, I don't see any reason for using the method.
24.09.2025 11:12 — 👍 0 🔁 0 💬 1 📌 0
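For readers unfamiliar with the comparator, here is a minimal sketch of the simple random-effects model referred to above, using the DerSimonian-Laird method-of-moments estimator (toy data invented for illustration; this is not code from any of the cited simulation studies).

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooled estimate.

    y : study effect sizes
    v : their sampling variances
    Returns (pooled effect, tau^2 estimate).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                       # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)  # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)     # method-of-moments tau^2, truncated at 0
    w_re = 1.0 / (v + tau2)           # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, tau2

# Toy data: five studies' Cohen's d values and sampling variances.
d = [0.45, 0.10, 0.60, -0.05, 0.30]
var = [0.04, 0.03, 0.05, 0.02, 0.03]
mu, tau2 = dersimonian_laird(d, var)
print(f"pooled d = {mu:.2f}, tau^2 = {tau2:.3f}")
```

The point of contention above is whether this plain estimator, despite ignoring selection, still estimates the effect size better on average than p-curve under realistic heterogeneity.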
How is it directly opposite to what I'm saying?
Also, glad we've reached late-stage science, where you start pulling out arguments from authority. Always great debating with you :)
24.09.2025 09:39 — 👍 0 🔁 0 💬 1 📌 0
Postdoctoral fellow at METRICS @stanford.edu
Interested in everything related to meta-research and evidence synthesis
https://sandovallentisco.github.io/
Researchers, practitioners, & open science advocates building a better system for research evaluation. Nonprofit. We commission public evaluation & rating of hosted work. To make rigorous research more impactful, & impactful research more rigorous.
Lecturer and Researcher at Department of Psychology, Princeton University.
Your friendly shitposting scientist, here to disappoint your biases. It's never personal. Topics: ITsec, psych, engineering, data science, coding, politics. German/English/Japanese/Dutch (in progress).
https://Roguehci.home.blog
Statistics, cognitive modelling, and other sundry things. Mastodon: @richarddmorey@tech.lgbt
[I deleted my twitter account]
MetaScience, MetaScientist, MetaPsycholog, UberScientist
Assistant Professor at UT-Austin.
Interested in meta-analysis, selective reporting and publication bias, single-case design, open science, and R.
Juan de la Cierva postdoctoral fellow at @uam.es | Psychological, meta- and contemplative research | INMINDS interview series | lcasedas.com
Systematic Reviews and Meta-Analysis. Researcher at The Danish Center for Social Science Research.
Author behind the AIscreenR package to screen titles and abstracts with GPT models. See https://osf.io/preprints/osf/yrhzm
Psychologist - Methodologist - Meta-Analyst. Assistant Professor at UNED (Spain)
Statistician, meta-analyst, “local mom” to my former students
Johns Hopkins BA/MA 1978
U of Chicago PhD 1985
Research Fellow, Adj. Associate Professor, Editor | Review of Education
#EvidenceSynthesis #AIEd #EdTech #StudentEngagement
Physicist Turned Psychologist | Senior Researcher in #STEMed | Meta-Analysis Nerd | https://d-miller.github.io/
Also posts about 🧪 science funding to focus my attention.
Personal account. I don’t speak for my employer or any other orgs.
Associate Professor in Biostatistics, Oslo Center for Biostatistics and Epidemiology @ocbe.bsky.social, University of Oslo | https://www.cer-methods.com/
Social Science Librarian in Amsterdam. Libraries, systematic reviews, knitting, sewing, photography
Campbell promotes evidence-based policy and practice. We publish social science evidence synthesis #research in our #OpenAccess journal. #SystematicReviews
#UCL #EPPI-Centre researcher
https://profiles.ucl.ac.uk/33741
Researching evidence informed policy-making; evidence synthesis; social exclusion; public health; social gerontology; LGBTQ health.
Cymro, E17 taffia, Tiger keeper, He/Him
Based at University College London, we aim to produce, support, and promote the use of collaborative, rigorous evidence for a more just and equitable world
🔗 https://eppi.ioe.ac.uk/cms/
🌍 London, UK
News and updates for EPPI Reviewer, software for systematic reviews, literature reviews and meta-analysis.