
Alejandro Sandoval-Lentisco

@asandovall.bsky.social

Postdoctoral fellow at METRICS @stanford.edu Interested in everything related to meta-research and evidence synthesis https://sandovallentisco.github.io/

224 Followers  |  381 Following  |  14 Posts  |  Joined: 02.01.2024

Posts by Alejandro Sandoval-Lentisco (@asandovall.bsky.social)

Preview
Application Form Senior Editor Clinical/Associate Editor Social Section Collabra: Psychology Starting from 1 July 2026, Collabra: Psychology is on the look-out for a new senior editor for the clinical section as well as several new associate editors for the social section. If you are interest...

Contribute to open science! Collabra: Psychology needs a new senior editor for the clinical section as well as several new associate editors for the social section. If you are interested, please fill out the application form before 30 April 2026. Repost please!
forms.gle/DgM3484SuLVD...

06.03.2026 17:47 — 👍 11    🔁 11    💬 1    📌 0
Post image

The FORRT Library of Reproduction and Replication Attempts (FLoRA) will be the basis of upcoming tools and projects that help replications and reproductions become a natural, difficult-to-ignore part of research. It is already implemented in our FLoRA Annotator: forrt.org/annotator/

06.03.2026 07:51 — 👍 3    🔁 1    💬 1    📌 0
Post image

Where have all the comments gone?
For decades, the American Economic Review regularly published formal comments — papers that replicate, reassess, or challenge earlier AER articles.
In our latest blog post, we show: they’ve nearly disappeared.

26.02.2026 13:01 — 👍 17    🔁 10    💬 1    📌 0
Preview
How Ten Publishers Retract Research Retractions are the primary mechanism for correcting the scholarly record, yet publishers differ markedly in how they use them. We present a bibliometric analysis of 46,087 retractions across 10 major...

In which we learn @acm.org is exceptionally bad at retracting articles that need retracting:

arxiv.org/abs/2602.191...

24.02.2026 12:54 — 👍 10    🔁 7    💬 1    📌 0
Post image

It’s out!

22.02.2026 01:35 — 👍 214    🔁 29    💬 7    📌 1

Our updated strategic plan!

The major change (improvement?!) is focus. COS does many things. My insistence that they all fit together in my head was apparently insufficient.

Now, we are trying to improve clarity, accountability, and effectiveness by aligning activities on a true north objective.

19.02.2026 15:39 — 👍 20    🔁 2    💬 1    📌 0
Post image

I have not seen this mentioned here, but this is a great example of how the research world has dramatically changed underneath us; if you're not keeping up, you're going to be left behind quickly:

yiqingxu.org/papers/2026_...

19.02.2026 17:03 — 👍 15    🔁 4    💬 4    📌 1
Preview
Sunsetting TOPFactor.org: What’s Changing and Why COS shares our plans to sunset TOPFactor.org, what prompted the decision, and how the research community can keep advancing open and transparent policymaking.

Since 2020, TOP Factor has helped researchers understand how journals support open practices. On March 16, 2026, COS will be sunsetting the tool.

Read more about what prompted the decision, what we've learned, and the future of open and transparent policymaking: www.cos.io/blog/sunsett...

18.02.2026 19:14 — 👍 8    🔁 6    💬 0    📌 0
OSF

Check out our preprint: "What Pilot Studies Can (and Cannot) Do for Validity in Psychological Research"

Great job @yashvin.bsky.social and @mbneff.bsky.social for leading!

doi.org/10.31234/osf...

16.02.2026 10:38 — 👍 16    🔁 8    💬 0    📌 0
First slide of presentation with OSC logo, social media handles, and name of presenter. The title says "From the replicability crisis to credible science" and has three badges: 1. Preregistered: 100% p-hacking free; 2. Open data: here, check our numbers; 3. Open materials: here's how you can replicate our results.

Taught Bavarian Center for Cancer Research students about

Preregistration → more reliable research
Reproducible workflows
FAIR data management → higher-quality, reusable data

Yes, FAIR & sensitive medical data are compatible

Slides osf.io/p9sev/files/...
Tutorials lmu-osc.github.io/training/sel...

17.02.2026 19:26 — 👍 18    🔁 5    💬 0    📌 1

New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.

13.02.2026 16:50 — 👍 98    🔁 52    💬 6    📌 7

vazul: An R Package for Analysis Blinding: https://osf.io/mp54s

12.02.2026 22:39 — 👍 3    🔁 2    💬 0    📌 0
Preview
Introduction to INSPECT-SR Training Workshop March (Europe) An introductory 2-hour online workshop will introduce participants to the INSPECT-SR tool for assessing trustworthiness of randomised controlled...

Next free online INSPECT-SR training workshop on March 6th. Register here: www.trybooking.com/uk/FZUN Timed for Europe. Will try to add something for North America in the next few days...

10.02.2026 16:57 — 👍 7    🔁 6    💬 0    📌 1
Post image

Early draft of my ebook for the course:

ianhussey.quarto.pub/reproducible...

05.02.2026 12:40 — 👍 28    🔁 8    💬 1    📌 0
Preview
Teaching Open Science ‘With open science becoming normative in academic research, it is important that it becomes part of student training to set students up for success. A key philosophy of Teaching Open Science is that open science principles are integrated throughout research education and training. Teachers will appreciate the concrete and comprehensive “how to” guidance across its 12 engaging chapters.’ – Brian Nosek, Center for Open Science and University of Virginia, USA

📚 Now available for preorder! Teaching Open Science is a practical guide for incorporating open science principles into teaching & learning across diverse contexts. Crystal Steltenpohl, COS Training & Education Manager, is lead author of the chapter on qualitative approaches.

06.02.2026 19:00 — 👍 16    🔁 4    💬 2    📌 1

Reporting Practices, Open Science Practices, and Trustworthiness of Simulation Studies in Psychology: A Questionnaire Study: https://osf.io/jn9sy

05.02.2026 15:08 — 👍 4    🔁 2    💬 0    📌 0

Now with more Rat dck!

05.02.2026 16:19 — 👍 2    🔁 0    💬 1    📌 0
ggrxiv - Personalized Paper Recommendations Get personalized research paper recommendations from arXiv and bioRxiv delivered to your inbox. Stay up-to-date with the latest research in your field.

Check out this tool to get paper recommendations every day from arXiv and bioRxiv; very simple but effective! www.ggrxiv.com

04.02.2026 18:51 — 👍 5    🔁 3    💬 1    📌 0
Diagram showing four phases of methodological research (Theory, Exploration, Systematic Comparison, Evidence Synthesis) with an arrow indicating that preregistration usefulness increases from early to late phases. Each phase lists its aim, elements, outcome, and an example from factor retention research.

Does it make sense to preregister simulation studies?
This question has sparked a lot of debate.

▶️ We* work through the why, when, and how
▶️ We discuss different phases of methodological research to clarify where preregistration might (or might not) add value

📝 Preprint: doi.org/10.31234/osf...

04.02.2026 10:40 — 👍 37    🔁 13    💬 1    📌 0

This headline number has generated a lot of attention, but does not account for the classifier's accuracy. @jamiecummins.bsky.social and I wrote a short commentary showing that, assuming a paper mill base rate of 10%, 30% of the flagged papers are false positives. At a base rate of 5%, 50% are FPs.

03.02.2026 13:44 — 👍 17    🔁 5    💬 1    📌 0
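The base-rate arithmetic in this post can be sketched in a few lines of Python. This is a minimal illustration, not the commentary's actual model: the sensitivity (1.0) and false-positive rate (4.8%) below are assumed values chosen only to show how a low base rate inflates the share of false positives among flagged papers.

```python
def false_discovery_rate(base_rate, sensitivity, fpr):
    """Fraction of flagged papers that are false positives: FDR = FP / (FP + TP)."""
    true_positives = sensitivity * base_rate        # genuine paper-mill papers flagged
    false_positives = fpr * (1 - base_rate)         # legitimate papers wrongly flagged
    return false_positives / (false_positives + true_positives)

# Hypothetical classifier settings (not from the commentary):
# perfect sensitivity, ~4.8% false-positive rate.
for base_rate in (0.10, 0.05):
    fdr = false_discovery_rate(base_rate, sensitivity=1.0, fpr=0.048)
    print(f"base rate {base_rate:.0%}: {fdr:.0%} of flagged papers are false positives")
```

With these illustrative numbers the false-discovery rate is about 30% at a 10% base rate and close to 50% at a 5% base rate, matching the shape of the argument: halving the prevalence of paper mills nearly doubles the fraction of flags that are wrong.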
Preview
A framework for assessing the trustworthiness of scientific research findings | PNAS Vigorous debate has erupted over the trustworthiness of scientific research findings in a number of domains. The question “what makes research find...

Our new paper, with colleagues from the Strategic Council of the National Academies, offers an integrative framework of the several components that contribute to making research findings trustworthy, including ethics, methodology, transparency, inclusion, assessment, etc.

www.pnas.org/doi/10.1073/...

03.02.2026 19:27 — 👍 38    🔁 17    💬 1    📌 3
Preview
Mega-journal Heliyon retracts hundreds of papers after internal audit Heliyon has published fewer papers and ramped up its retractions since a major indexing service put the journal on hold and the publisher launched an audit of all papers published in the journal si…

Heliyon has published fewer papers and ramped up its retractions since a major indexing service put the journal on hold and the publisher launched an audit of all papers published in the journal since its launch in 2016.

03.02.2026 17:39 — 👍 17    🔁 7    💬 1    📌 2
Preview
AStA Advances in Statistical Analysis AStA Advances in Statistical Analysis is a quarterly journal that publishes original contributions on statistical methodology, applications, and review ...

Advances in Statistical Analysis has a call for papers on the role of multiverse analysis in statistical modelling and applications: link.springer.com/journal/1018...

Deadline is May 1st, so still plenty of time to put something together!

03.02.2026 17:42 — 👍 28    🔁 12    💬 2    📌 1

Interesting! AI-assisted assessment of responsible research practices. www.biorxiv.org/content/10.6...

#metascience #meta-science #meta-research

03.02.2026 18:55 — 👍 1    🔁 0    💬 0    📌 0
Preview
Likelihood Ratio Test for Publication Bias – a proof of concept - MetaROR

Publication bias poses a serious challenge to clarity and precision in scientific research & meta-analyses. This article by Paweł Lenartowicz proposes a way to deal with this: the Likelihood Ratio Test for Publication Bias.

👇 Read the editorial assessment, peer reviews, and full article on MetaROR now

03.02.2026 16:49 — 👍 8    🔁 3    💬 0    📌 0
Post image

Wiley: "We’re supporting responsible research assessment practices" onlinelibrary.wiley.com/journal/1520...

Also Wiley: "Prove that your article is a good fit for this journal 😉😉😉😉😉 by citing at least two of our articles in your manuscript before we will even consider reviewing it" 🤡

30.01.2026 11:29 — 👍 65    🔁 31    💬 9    📌 8

Preregistration Works: Increased Reporting Quality, Internal Validity, and Protocol Adherence in Animal Studies: https://osf.io/ruw7p

30.01.2026 22:34 — 👍 1    🔁 1    💬 0    📌 0

📅 Mark your calendars for #SIPS2027!
The 2027 SIPS conference, organized in collaboration with the Association for Interdisciplinary Meta-Research and Open Science @aimosinc.bsky.social, will be held in November at the University of Melbourne, Australia.

We are looking forward to seeing you there!

27.01.2026 15:21 — 👍 38    🔁 24    💬 0    📌 2

The Iowa Gambling Task is an extreme example of the jingle fallacy and schmeasurement.

In 100 articles we found 244 different ways of scoring it; 177 were never reused. Correlations between them range from -.99 to .99.

At the same time, we show meta-analyses combine these results as if they’re equivalent.

25.01.2026 12:01 — 👍 140    🔁 54    💬 5    📌 4
Preview
RegCheck RegCheck is an AI tool to compare preregistrations with papers instantly.

Comparing registrations to published papers is essential to research integrity, and almost no one does it routinely because it's slow, messy, and time-consuming.

RegCheck was built to help make this process easier.

Today, we launch RegCheck V2.

🧵

regcheck.app

22.01.2026 11:05 — 👍 174    🔁 90    💬 8    📌 6