
Alejandro Sandoval-Lentisco

@asandovall.bsky.social

Postdoctoral fellow at METRICS @stanford.edu Interested in everything related to meta-research and evidence synthesis https://sandovallentisco.github.io/

215 Followers  |  371 Following  |  8 Posts  |  Joined: 02.01.2024

Latest posts by asandovall.bsky.social on Bluesky

ECR Reviewers Platform

Despite prepublication peer review being a core part of science, training materials for it are sparse. This open-source guide is very valuable for ECRs!

ecr-reviewers.gitlab.io/guide/

04.12.2025 11:53 — 👍 43    🔁 20    💬 1    📌 3
Increasing Precision of Terms Related to Reproducibility and Replicability Search OpenAlex for reproducibility studies and you will find many papers. Search for replicability studies. Same. Search for robustness studies. Same. A lot of research has been done on these topics ...

Search for papers on reproducibility, replicability, or robustness and you'll find plenty of results — and plenty of inconsistency in how those terms are used.

A new COS blog by Brian Nosek shares SCORE's working definitions and links to a short preprint with more detail.

www.cos.io/blog/increas...

03.12.2025 22:33 — 👍 4    🔁 1    💬 0    📌 0

New preprint 🎉 Living systematic reviews ensure evidence stays current osf.io/preprints/ps...

In this brief comment (all four pages are here ⬇️), @iaiversen.bsky.social and I cover the benefits and challenges of living systematic reviews, along with two ways to increase their uptake

🧵 1/8

01.12.2025 19:13 — 👍 52    🔁 16    💬 5    📌 1
PCI Webinar series #13 - Simine Vazire - Recognizing and responding to a replication crisis

In case you missed Simine Vazire's excellent webinar yesterday, here is the link to watch it online: youtu.be/_vb1CNwC3CM Thanks again @simine.com for staying up so late, and thanks to the audience for the great questions!

02.12.2025 10:17 — 👍 48    🔁 30    💬 1    📌 5

Asking informally: does anyone know someone who might be interested in a postdoc focused on understanding changes in memory representations driven by attention using EEG? ⚡️ Thanks!

01.12.2025 05:03 — 👍 15    🔁 25    💬 0    📌 0
Postdoctoral Fellowship Announcement 2026-27

METRICS is accepting applications for the 2026–27 postdoctoral fellowship in meta-research at Stanford. Deadline: Feb 15, 2026. Start date will be around Oct 1, 2026 (± 2 months flexibility). See: metrics.stanford.edu/postdoctoral... #MetaResearch #postdoc

26.11.2025 22:05 — 👍 9    🔁 15    💬 0    📌 1

Job opportunity — Junior Professorship in Psychological Metascience @zpid.bsky.social leibniz-psychology.onlyfy.jobs/job/10kku5n7 h/t @bethclarke.bsky.social

26.11.2025 03:10 — 👍 23    🔁 23    💬 1    📌 2

📢 We're thrilled to announce the launch of our new "Research Fellows" program! 🎉

If you've contributed to a Replication Game, published in our Discussion Papers series, or helped organize one of our events — you're invited to join the roster of I4R Research Fellows.

25.11.2025 14:10 — 👍 12    🔁 5    💬 1    📌 1
From Policy to Practice: COS's Commitment to Applying the Transparency and Openness Promotion (TOP) Guidelines We recognize that policy efforts on their own can sometimes seem abstract — and even idealistic — to translate into real-world adoption. Over the next six months, we'll be giving you a view into the nuanced and at times messy process of transforming policy into meaningful action.

Turning abstract policy into real-world practice is challenging, and sometimes messy. In the coming months, COS will highlight how we and others in the research community are putting the TOP Guidelines into action in real research settings.

💡 Learn more in our blog post:

20.11.2025 14:45 — 👍 11    🔁 4    💬 0    📌 2

MDPI is a disgrace...
direct.mit.edu/qss/article/...

20.11.2025 10:43 — 👍 9    🔁 3    💬 0    📌 0
This paper provides guidance and tools for conducting open and reproducible systematic reviews in psychology. It emphasizes the importance of systematic reviews for evidence-based decision-making and the growing adoption of open science practices. Open science enhances transparency, reproducibility, and minimizes bias in systematic reviews by sharing data, materials, and code. It also fosters collaborations and enables involvement of non-academic stakeholders. The paper is designed for beginners, offering accessible guidance to navigate the many standards and resources that may not obviously align with specific areas of psychology. It covers systematic review conduct standards, pre-registration, registered reports, reporting standards, and open data, materials and code. The paper is concluded with a glimpse of recent innovations like Community Augmented Meta-Analysis and independent reproducibility checks.

There is no reason why systematic reviews can't be open. The data used for synthesis is *already* open and there are many excellent open source tools that can facilitate the easy sharing of analysis scripts.

Here's a nice guide for performing open systematic reviews doi.org/10.1525/coll...

24.11.2025 12:10 — 👍 117    🔁 39    💬 0    📌 0

๐Ÿ—“๏ธ Save the date: The next BITSS Annual Meeting is April 16, 2026! The conference will gather experts to discuss changes in academic publishing, AI, and current challenges to research transparency.

@cega-uc.bsky.social @tedmiguel.bsky.social

21.11.2025 18:24 — 👍 8    🔁 5    💬 0    📌 1
Bayesian Data Analysis with JASP: Course for the EAM - YouTube Short course to cover Bayesian Data Analysis. Starts at the foundations of probability and statistical inference, how Bayesian inference is built and how it ...

Just released my short course on Bayesian Data Analysis using JASP. Hope you find it useful!

📚 Materials, scripts, data and links: doi.org/10.17605/OSF...

📺 Playlist: www.youtube.com/playlist?lis...

#StatsEd #Bayes #OpenScience #Statistics #Teaching
@rosenetwork.bsky.social

21.11.2025 15:23 — 👍 8    🔁 6    💬 1    📌 0
The strain on scientific publishing Abstract. Scientists are increasingly overwhelmed by the volume of articles being published. The total number of articles indexed in Scopus and Web of Science has grown exponentially in recent years; ...

Hi! Next Monday, November 24, at 4 pm we have a new meeting 🥳. We'll discuss the publication system with the article by Hanson et al. (2024) - The strain on scientific publishing (doi.org/10.1162/qss_...). See you there!

20.11.2025 00:06 — 👍 3    🔁 3    💬 0    📌 0

@METRICStanford is accepting applications for a 2026-27 postdoctoral fellowship in meta-research: metrics.stanford.edu/postdoctoral.... The deadline for applications is February 15, 2026, and the position is expected to start on October 1, 2026.

19.11.2025 17:49 — 👍 1    🔁 2    💬 0    📌 0

Tomorrow 9am PT: How Reproducible is Health Science Research? by Niklas Bobrovitz and Stephana Moss, covering 38 metrics of reproducibility and 180 prevalence estimates. Registration at: stanford.zoom.us/meeting/regi...

17.11.2025 18:58 — 👍 4    🔁 3    💬 0    📌 0
Transparent and comprehensive statistical reporting is critical for ensuring the credibility, reproducibility, and interpretability of psychological research. This paper offers a structured set of guidelines for reporting statistical analyses in quantitative psychology, emphasizing clarity at both the planning and results stages. Drawing on established recommendations and emerging best practices, we outline key decisions related to hypothesis formulation, sample size justification, preregistration, outlier and missing data handling, statistical model specification, and the interpretation of inferential outcomes. We address considerations across frequentist and Bayesian frameworks and fixed as well as sequential research designs, including guidance on effect size reporting, equivalence testing, and the appropriate treatment of null results. To facilitate implementation of these recommendations, we provide the Transparent Statistical Reporting in Psychology (TSRP) Checklist that researchers can use to systematically evaluate and improve their statistical reporting practices (https://osf.io/t2zpq/). In addition, we provide a curated list of freely available tools, packages, and functions that researchers can use to implement transparent reporting practices in their own analyses to bridge the gap between theory and practice. To illustrate the practical application of these principles, we provide a side-by-side comparison of insufficient versus best-practice reporting using a hypothetical cognitive psychology study. By adopting transparent reporting standards, researchers can improve the robustness of individual studies and facilitate cumulative scientific progress through more reliable meta-analyses and research syntheses.

Our paper on improving statistical reporting in psychology is now online ๐ŸŽ‰

As a part of this paper, we also created the Transparent Statistical Reporting in Psychology checklist, which researchers can use to improve their statistical reporting practices

www.nature.com/articles/s44...

14.11.2025 20:43 — 👍 232    🔁 91    💬 8    📌 5

So @jamesheathers.bsky.social & I answer the burning question: does Cake cause Herpes? No, but one can torture the data to give that impression, and that's a problem. Promiscuous dichotomisation in biomedical science hugely increases spurious findings, best avoided
link.springer.com/article/10.1...
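The inflation the post describes can be illustrated with a minimal simulation of my own (not the paper's analysis, and the variable names are purely illustrative): on pure-noise data with no real exposure-outcome relationship, trying several dichotomisation cut-points and keeping the most favourable one pushes the false-positive rate above the nominal 5% of a single honest test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 2000, 100, 0.05
cuts = [20, 30, 40, 50, 60, 70, 80]  # candidate percentile cut-points

fp_continuous = 0  # false positives from one test on the continuous data
fp_best_cut = 0    # false positives when any cut-point "works"

for _ in range(n_sims):
    x = rng.normal(size=n)  # exposure (e.g. cake consumption)
    y = rng.normal(size=n)  # outcome, independent of x: the true effect is nil
    # honest analysis: a single correlation test on the continuous exposure
    if stats.pearsonr(x, y)[1] < alpha:
        fp_continuous += 1
    # data torture: dichotomise x at several percentiles, keep the best p-value
    best_p = min(
        stats.ttest_ind(y[x > np.percentile(x, q)], y[x <= np.percentile(x, q)])[1]
        for q in cuts
    )
    if best_p < alpha:
        fp_best_cut += 1

print(f"false-positive rate, continuous analysis:  {fp_continuous / n_sims:.3f}")
print(f"false-positive rate, best dichotomisation: {fp_best_cut / n_sims:.3f}")
```

The first rate sits near the nominal 0.05; the second is noticeably higher, even though the cut-point tests are correlated with each other, which is exactly the "torture the data" effect the post warns about.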

13.11.2025 12:42 — 👍 17    🔁 9    💬 3    📌 1
"Belief in the law of small numbers" as a way to understand the continuing appeal of junk science | Statistical Modeling, Causal Inference, and Social Science

"Belief in the law of small numbers" as a way to understand the continuing appeal of junk science
statmodeling.stat.columbia.edu/2025/11/12/b...

12.11.2025 14:43 — 👍 10    🔁 3    💬 0    📌 1

What is the most profitable industry in the world, this side of the law? Not oil, not IT, not pharma.

It's *scientific publishing*.

We call this the Drain of Scientific Publishing.

Paper: arxiv.org/abs/2511.04820
Background: doi.org/10.1162/qss_...

Thread @markhanson.fediscience.org.ap.brid.gy 👇

12.11.2025 10:31 — 👍 332    🔁 239    💬 8    📌 17
A table showing profit margins of major publishers. A snippet of text related to this table is below.

1. The four-fold drain
1.1 Money
Currently, academic publishing is dominated by profit-oriented, multinational companies for
whom scientific knowledge is a commodity to be sold back to the academic community who
created it. The dominant four are Elsevier, Springer Nature, Wiley and Taylor & Francis,
which collectively generated over US$7.1 billion in revenue from journal publishing in 2024
alone, and over US$12 billion in profits between 2019 and 2024 (Table 1A). Their profit
margins have always been over 30% in the last five years, and for the largest publisher
(Elsevier) always over 37%.
Against many comparators, across many sectors, scientific publishing is one of the most
consistently profitable industries (Table S1). These financial arrangements make a substantial
difference to science budgets. In 2024, 46% of Elsevier revenues and 53% of Taylor &
Francis revenues were generated in North America, meaning that North American
researchers were charged over US$2.27 billion by just two for-profit publishers. The
Canadian research councils and the US National Science Foundation were allocated US$9.3
billion in that year.


A figure detailing the drain on researcher time.

1. The four-fold drain

1.2 Time
The number of papers published each year is growing faster than the scientific workforce,
with the number of papers per researcher almost doubling between 1996 and 2022 (Figure
1A). This reflects the fact that publishers' commercial desire to publish (sell) more material
has aligned well with the competitive prestige culture in which publications help secure jobs,
grants, promotions, and awards. To the extent that this growth is driven by a pressure for
profit, rather than scholarly imperatives, it distorts the way researchers spend their time.
The publishing system depends on unpaid reviewer labour, estimated to be over 130 million
unpaid hours annually in 2020 alone (9). Researchers have complained about the demands of
peer-review for decades, but the scale of the problem is now worse, with editors reporting
widespread difficulties recruiting reviewers. The growth in publications involves not only the
authors' time, but that of academic editors and reviewers who are dealing with so many
review demands.
Even more seriously, the imperative to produce ever more articles reshapes the nature of
scientific inquiry. Evidence across multiple fields shows that more papers result in
'ossification', not new ideas (10). It may seem paradoxical that more papers can slow
progress until one considers how it affects researchersโ€™ time. While rewards remain tied to
volume, prestige, and impact of publications, researchers will be nudged away from riskier,
local, interdisciplinary, and long-term work. The result is a treadmill of constant activity with
limited progress whereas core scholarly practices – such as reading, reflecting and engaging
with others' contributions – is de-prioritized. What looks like productivity often masks
intellectual exhaustion built on a demoralizing, narrowing scientific vision.


A table of profit margins across industries. The accompanying text repeats the section 1.1 ("Money") passage quoted above.

The costs of inaction are plain: wasted public funds, lost researcher time, compromised
scientific integrity and eroded public trust. Today, the system rewards commercial publishers
first, and science second. Without bold action from the funders we risk continuing to pour
resources into a system that prioritizes profit over the advancement of scientific knowledge.


We wrote the Strain on scientific publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...

11.11.2025 11:52 — 👍 609    🔁 435    💬 8    📌 62
The Drain of Scientific Publishing The domination of scientific publishing in the Global North by major commercial publishers is harmful to science. We need the most powerful members of the research community, funders, governments and ...

Dotting the i's 👏

Beigel, F., Brockington, D., Crosetto, P., Derrick, G., Fyfe, A., Barreiro, P. G., Hanson, M. A., Haustein, S., Larivière, V., Noe, C., Pinfield, S., & Wilsdon, J. (2025). The Drain of Scientific Publishing (No. arXiv:2511.04820). arXiv. doi.org/10.48550/arX...

10.11.2025 17:19 — 👍 7    🔁 4    💬 0    📌 0
Introduction to INSPECT-SR Training Workshop December An introductory 2-hour online workshop will introduce participants to the INSPECT-SR tool for assessing trustworthiness of randomised controlled...

We have increased capacity for this December INSPECT-SR online training workshop following a successful first event today. Book here: www.trybooking.com/uk/FKHV

06.11.2025 17:55 — 👍 12    🔁 7    💬 0    📌 2

A study I wrote to the journal about in May this year was just retracted: www.sciencedirect.com/science/arti...

The review paper claimed that the majority of benefits in clinical trials could be explained by placebo effects.

06.11.2025 00:19 — 👍 44    🔁 9    💬 1    📌 0
PCI Webinar Series - Peer Community In The PCI webinar series is a series of seminars on research practices, publication practices, evaluation, scientific integrity, meta-research, organised by Peer Community In

📣 Save the date for the 13th PCI webinar on December 1st, 2025, at 4 PM CET!! Simine Vazire (University of Melbourne, Australia) will present "Recognizing and responding to a replication crisis: Lessons from Psychology". For more details and registration, visit: buff.ly/wZNoD2v

05.11.2025 10:49 — 👍 25    🔁 21    💬 1    📌 3

Fun fact: it has been 56 days since I notified the editors of Neurology about glaring statistical errors in this peer-reviewed study on sweeteners and cognitive health.

No expression of concern, no correction, no retraction.

02.11.2025 16:06 — 👍 91    🔁 21    💬 3    📌 3
PubPeer - Prevention of acute myocardial infarction induced heart fail... There are comments on PubPeer for publication: Prevention of acute myocardial infarction induced heart failure by intracoronary infusion of mesenchymal stem cells: phase 3 randomised clinical trial (P...

@bmj.com Please look at PubPeer comments on an article you published last week. pubpeer.com/publications...
I think your research integrity dept should act swiftly on this one, given the clinical significance.
I'm aware of even more evidence of problems so let me know if this is not sufficient.

02.11.2025 15:02 — 👍 66    🔁 31    💬 6    📌 4
Check Research Outputs for Best Practices A modular, extendable system for automatically checking research outputs for best practices using text search, R code, and/or (optional) LLM queries.

The package formerly known as papercheck has changed its name to metacheck! We're checking more than just papers, with functions to assess OSF projects, GitHub repos, and AsPredicted pre-registrations, with more being developed all the time.

scienceverse.github.io/metacheck/

03.11.2025 16:20 — 👍 72    🔁 32    💬 0    📌 0

Fatal flaws in "The relationship between personality traits and marital satisfaction: a systematic review and meta-analysis": https://osf.io/t8kzm

01.11.2025 14:12 — 👍 2    🔁 2    💬 0    📌 0

Join us on November 18, 9 am PT | 12 pm ET | 6 pm CET, for the 8th Reproducibility Rounds: How Reproducible is Health Science Research? Registration at: stanford.zoom.us/meeting/regi...

31.10.2025 22:56 — 👍 3    🔁 4    💬 0    📌 0
