Despite Prepublication Peer Review being a core part of science, training materials for it are sparse. This open source guide is very valuable for ECRs!
ecr-reviewers.gitlab.io/guide/
@asandovall.bsky.social
Postdoctoral fellow at METRICS @stanford.edu. Interested in everything related to meta-research and evidence synthesis. https://sandovallentisco.github.io/
Search for papers on reproducibility, replicability, or robustness and you'll find plenty of results, and plenty of inconsistency in how those terms are used.
A new COS blog by Brian Nosek shares SCORE's working definitions and links to a short preprint with more detail.
www.cos.io/blog/increas...
New preprint: Living systematic reviews ensure evidence stays current osf.io/preprints/ps...
In this brief comment (all four pages are here ⬇️), @iaiversen.bsky.social and I cover the benefits and challenges of living systematic reviews, along with two ways to increase their uptake.
🧵 1/8
In case you missed Simine Vazire's excellent webinar yesterday, here is the link to watch it online: youtu.be/_vb1CNwC3CM Thanks again @simine.com for staying up so late, and thanks to the audience for the great questions!
02.12.2025 10:17
Asking informally: does anyone know someone who might be interested in a postdoc focused on understanding changes in memory representations driven by attention using EEG? Thanks!
01.12.2025 05:03
METRICS is accepting applications for the 2026-27 postdoctoral fellowship in meta-research at Stanford. Deadline: Feb 15, 2026. Start date will be around Oct 1, 2026 (+/- 2 months flexibility). See: metrics.stanford.edu/postdoctoral... #MetaResearch #postdoc
26.11.2025 22:05
Job opportunity: Junior Professorship in Psychological Metascience @zpid.bsky.social leibniz-psychology.onlyfy.jobs/job/10kku5n7 h/t @bethclarke.bsky.social
26.11.2025 03:10
We're thrilled to announce the launch of our new "Research Fellows" program!
If you've contributed to a Replication Game, published in our Discussion Papers series, or helped organize one of our events, you're invited to join the roster of I4R Research Fellows.
Turning abstract policy into real-world practice is challenging, and sometimes messy. In the coming months, COS will highlight how we and others in the research community are putting the TOP Guidelines into action in real research settings.
Learn more in our blog post:
MDPI is a disgrace...
direct.mit.edu/qss/article/...
This paper provides guidance and tools for conducting open and reproducible systematic reviews in psychology. It emphasizes the importance of systematic reviews for evidence-based decision-making and the growing adoption of open science practices. Open science enhances transparency and reproducibility, and minimizes bias in systematic reviews, through the sharing of data, materials, and code. It also fosters collaboration and enables the involvement of non-academic stakeholders. The paper is designed for beginners, offering accessible guidance for navigating the many standards and resources that may not obviously align with specific areas of psychology. It covers systematic review conduct standards, pre-registration, registered reports, reporting standards, and open data, materials, and code. The paper concludes with a glimpse of recent innovations like Community-Augmented Meta-Analysis and independent reproducibility checks.
There is no reason why systematic reviews can't be open. The data used for synthesis is *already* open and there are many excellent open source tools that can facilitate the easy sharing of analysis scripts.
Here's a nice guide for performing open systematic reviews doi.org/10.1525/coll...
Save the date: The next BITSS Annual Meeting is April 16, 2026! The conference will gather experts to discuss changes in academic publishing, AI, and current challenges to research transparency.
@cega-uc.bsky.social @tedmiguel.bsky.social
Just released my short course on Bayesian Data Analysis using JASP. Hope you find it useful!
Materials, scripts, data, and links: doi.org/10.17605/OSF...
Playlist: www.youtube.com/playlist?lis...
#StatsEd #Bayes #OpenScience #Statistics #Teaching
@rosenetwork.bsky.social
Hi! Next Monday, November 24, at 4pm we have a new meeting. We will discuss the publication system, using the article by Hanson et al. (2024), The strain on scientific publishing (doi.org/10.1162/qss_...). See you there!
20.11.2025 00:06
@METRICStanford is accepting applications for a 2026-27 postdoctoral fellowship in meta-research: metrics.stanford.edu/postdoctoral.... The deadline for applications is February 15, 2026; the position is expected to start on October 1, 2026.
19.11.2025 17:49
Tomorrow at 9am PT: How Reproducible is Health Science Research? by Niklas Bobrovitz and Stephana Moss, covering 38 metrics of reproducibility and 180 prevalence estimates. Registration at: stanford.zoom.us/meeting/regi...
17.11.2025 18:58
Transparent and comprehensive statistical reporting is critical for ensuring the credibility, reproducibility, and interpretability of psychological research. This paper offers a structured set of guidelines for reporting statistical analyses in quantitative psychology, emphasizing clarity at both the planning and results stages. Drawing on established recommendations and emerging best practices, we outline key decisions related to hypothesis formulation, sample size justification, preregistration, outlier and missing data handling, statistical model specification, and the interpretation of inferential outcomes. We address considerations across frequentist and Bayesian frameworks and fixed as well as sequential research designs, including guidance on effect size reporting, equivalence testing, and the appropriate treatment of null results. To facilitate implementation of these recommendations, we provide the Transparent Statistical Reporting in Psychology (TSRP) Checklist that researchers can use to systematically evaluate and improve their statistical reporting practices (https://osf.io/t2zpq/). In addition, we provide a curated list of freely available tools, packages, and functions that researchers can use to implement transparent reporting practices in their own analyses to bridge the gap between theory and practice. To illustrate the practical application of these principles, we provide a side-by-side comparison of insufficient versus best-practice reporting using a hypothetical cognitive psychology study. By adopting transparent reporting standards, researchers can improve the robustness of individual studies and facilitate cumulative scientific progress through more reliable meta-analyses and research syntheses.
Our paper on improving statistical reporting in psychology is now online!
As part of this paper, we also created the Transparent Statistical Reporting in Psychology checklist, which researchers can use to improve their statistical reporting practices.
www.nature.com/articles/s44...
So @jamesheathers.bsky.social & I answer the burning question: does Cake cause Herpes? No, but one can torture the data to give that impression, and that's a problem. Promiscuous dichotomisation in biomedical science hugely increases spurious findings and is best avoided.
link.springer.com/article/10.1...
"Belief in the law of small numbers" as a way to understand the continuing appeal of junk science
statmodeling.stat.columbia.edu/2025/11/12/b...
What is the most profitable industry in the world, this side of the law? Not oil, not IT, not pharma.
It's *scientific publishing*.
We call this the Drain of Scientific Publishing.
Paper: arxiv.org/abs/2511.04820
Background: doi.org/10.1162/qss_...
Thread @markhanson.fediscience.org.ap.brid.gy
A table showing profit margins of major publishers. A snippet of text related to this table is below.

1. The four-fold drain
1.1 Money

Currently, academic publishing is dominated by profit-oriented, multinational companies for whom scientific knowledge is a commodity to be sold back to the academic community who created it. The dominant four are Elsevier, Springer Nature, Wiley and Taylor & Francis, which collectively generated over US$7.1 billion in revenue from journal publishing in 2024 alone, and over US$12 billion in profits between 2019 and 2024 (Table 1A). Their profit margins have been above 30% in each of the last five years, and for the largest publisher (Elsevier) always above 37%. Against many comparators, across many sectors, scientific publishing is one of the most consistently profitable industries (Table S1). These financial arrangements make a substantial difference to science budgets. In 2024, 46% of Elsevier revenues and 53% of Taylor & Francis revenues were generated in North America, meaning that North American researchers were charged over US$2.27 billion by just two for-profit publishers. By comparison, the Canadian research councils and the US National Science Foundation were allocated US$9.3 billion in that year.
A figure detailing the drain on researcher time. The related passage is below.

1. The four-fold drain
1.2 Time

The number of papers published each year is growing faster than the scientific workforce, with the number of papers per researcher almost doubling between 1996 and 2022 (Figure 1A). This reflects the fact that publishers' commercial desire to publish (sell) more material has aligned well with the competitive prestige culture in which publications help secure jobs, grants, promotions, and awards. To the extent that this growth is driven by a pressure for profit rather than scholarly imperatives, it distorts the way researchers spend their time. The publishing system depends on unpaid reviewer labour, estimated at over 130 million unpaid hours in 2020 alone (9). Researchers have complained about the demands of peer review for decades, but the scale of the problem is now worse, with editors reporting widespread difficulties recruiting reviewers. The growth in publications consumes not only the authors' time, but that of the academic editors and reviewers who must handle so many review demands. Even more seriously, the imperative to produce ever more articles reshapes the nature of scientific inquiry. Evidence across multiple fields shows that more papers result in "ossification", not new ideas (10). It may seem paradoxical that more papers can slow progress until one considers how it affects researchers' time. While rewards remain tied to the volume, prestige, and impact of publications, researchers will be nudged away from riskier, local, interdisciplinary, and long-term work. The result is a treadmill of constant activity with limited progress, whereas core scholarly practices, such as reading, reflecting, and engaging with others' contributions, are de-prioritized. What looks like productivity often masks intellectual exhaustion built on a demoralizing, narrowing scientific vision.
A table of profit margins across industries (Table S1). The related section of text, 1.1 Money, is quoted above.
The costs of inaction are plain: wasted public funds, lost researcher time, compromised scientific integrity, and eroded public trust. Today, the system rewards commercial publishers first and science second. Without bold action from funders, we risk continuing to pour resources into a system that prioritizes profit over the advancement of scientific knowledge.
We wrote The Strain on Scientific Publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we now present The Drain of Scientific Publishing:
a 🧵 1/n
Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...
Dotting the i's and crossing the t's.
Beigel, F., Brockington, D., Crosetto, P., Derrick, G., Fyfe, A., Barreiro, P. G., Hanson, M. A., Haustein, S., Larivière, V., Noe, C., Pinfield, S., & Wilsdon, J. (2025). The Drain of Scientific Publishing (No. arXiv:2511.04820). arXiv. doi.org/10.48550/arX...
We have increased capacity for this December INSPECT-SR online training workshop following a successful first event today. Book here: www.trybooking.com/uk/FKHV
06.11.2025 17:55
A study I wrote to the journal about in May this year was just retracted: www.sciencedirect.com/science/arti...
The review paper claimed that the majority of benefits in clinical trials could be explained by placebo effects.
Save the date for the 13th PCI webinar on December 1st, 2025, at 4 PM CET! Simine Vazire (University of Melbourne, Australia) will present "Recognizing and responding to a replication crisis: Lessons from Psychology". For more details and registration, visit: buff.ly/wZNoD2v
05.11.2025 10:49
Fun fact: it has been 56 days since I notified the editors of Neurology about glaring statistical errors in this peer-reviewed study on sweeteners and cognitive health.
No expression of concern, no correction, no retraction.
@bmj.com Please look at PubPeer comments on an article you published last week. pubpeer.com/publications...
I think your research integrity department should act swiftly on this one, given the clinical significance.
I'm aware of even more evidence of problems so let me know if this is not sufficient.
The package formerly known as papercheck has changed its name to metacheck! We're checking more than just papers, with functions to assess OSF projects, GitHub repos, and AsPredicted pre-registrations, and more being developed all the time.
scienceverse.github.io/metacheck/
Fatal flaws in "The relationship between personality traits and marital satisfaction: a systematic review and meta-analysis": https://osf.io/t8kzm
01.11.2025 14:12
Join us on November 18, 9 am PT | 12 pm ET | 6 pm CET, for the 8th Reproducibility Rounds: How Reproducible is Health Science Research? Registration at: stanford.zoom.us/meeting/regi...
31.10.2025 22:56