
Olmo van den Akker

@denolmo.bsky.social

Postdoc @ QUEST Center for Responsible Research & Tilburg University. Into research about improving preregistration, secondary data analysis, and peer review.

196 Followers  |  191 Following  |  54 Posts  |  Joined: 01.12.2023

Latest posts by denolmo.bsky.social on Bluesky

Hi GRIOS, is the contact form on your website functional? I sent a message just after Metascience25 but haven't heard back yet.

01.10.2025 15:13 · 👍 0    🔁 0    💬 1    📌 0

@michelenuijten.bsky.social

18.09.2025 06:54 · 👍 1    🔁 0    💬 0    📌 0

🔓 It's great to see authors sharing their experiences with publishing on MetaROR (MetaResearch Open Review), our open review platform for metascience using the publish–review–curate model: www.openscience.nl/en/cases/the...

09.09.2025 11:42 · 👍 4    🔁 3    💬 0    📌 0
Perspective on Scientific Error – 8th Perspectives on Scientific Error Workshop

Here's another conference that aims to bridge fields: errorsin.science/pse8/

In Leiden from 11-13 Feb 2026 (submission deadline 15 October)

08.09.2025 20:57 · 👍 1    🔁 0    💬 0    📌 0

We are about a month away from releasing a complete refresh of the OSF user interface. The team has been working on this for a very long time, and we are very excited to be able to share it soon. A preview picture:

04.09.2025 21:57 · 👍 149    🔁 31    💬 10    📌 3

- Journals should state their aims and scope from the outset and implement mechanisms to assess whether they achieve those aims. The aims could also be things like "we want to publish high-risk research"
- Meta-research is necessary to find out which journals deserve prestige

#PRC10

04.09.2025 22:23 · 👍 2    🔁 0    💬 0    📌 0

- "The replication crisis forced changes in transparency for the research itself, but not for the publication process"
- We need to raise our expectations of journals. How? Nullius in verba (don't take their word for it!)

#PRC10

04.09.2025 22:23 · 👍 2    🔁 0    💬 1    📌 0

Many interesting tidbits in @simine.com's talk. A selection:
- Journal prestige depends on factors like aims and scope, selectivity, and impact factor, but changes in these factors do not always lead to changes in journal prestige: prestige is sticky

#PRC10

04.09.2025 22:23 · 👍 2    🔁 0    💬 1    📌 0

There is also a publish-review-curate publishing platform specifically dedicated to meta-research: metaror.org

Send your studies on peer review there and be part of the future of science!

(CoI statement: I'm an ERC representative at MetaROR)

#PRC10

04.09.2025 19:20 · 👍 6    🔁 3    💬 0    📌 0

eLife (talk by Nicola Adamson) uses a publish-review-curate model and a common vocabulary to assess manuscripts.

For strength of evidence: exceptional, compelling, convincing, solid, incomplete, & inadequate

For significance of findings: landmark, fundamental, important, valuable, & useful

#PRC10
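The two vocabularies form ordered scales, which makes them easy to encode for analysis. A minimal sketch; the orderings are taken from the post above, and the helper is illustrative, not an eLife tool:

```python
# eLife's common assessment terms as ordered scales, strongest first.
# Orderings taken from the post; this encoding is illustrative only.
STRENGTH_OF_EVIDENCE = [
    "exceptional", "compelling", "convincing",
    "solid", "incomplete", "inadequate",
]
SIGNIFICANCE_OF_FINDINGS = [
    "landmark", "fundamental", "important", "valuable", "useful",
]

def strength_rank(term: str) -> int:
    """Return the position on the strength scale (lower = stronger)."""
    return STRENGTH_OF_EVIDENCE.index(term)

assert strength_rank("compelling") < strength_rank("solid")
```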

04.09.2025 19:17 · 👍 5    🔁 1    💬 2    📌 0

New peer review dataset incoming!

Involves authors, topic area, editorial decision, author characteristics (institutional prestige, region, gender), BoRE evaluations, review characteristics (length, sentiment, z-score, reviewer gender).

(Talk by Aaron Clauset)

#PRC10
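A minimal sketch of what one record in such a dataset might look like; every field name below is my guess from the variables listed in the post, not the actual schema from Clauset's talk:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewRecord:
    """One hypothetical row; fields inferred from the post, not the real schema."""
    topic_area: str
    editorial_decision: str                       # e.g. "accept", "reject"
    author_institution_prestige: Optional[float]  # institutional prestige proxy
    author_region: Optional[str]
    author_gender: Optional[str]
    bore_evaluation: Optional[str]                # BoRE evaluation, format unknown
    review_length_words: int
    review_sentiment: float                       # e.g. polarity in [-1, 1]
    review_zscore: float                          # standardized review score
    reviewer_gender: Optional[str]
```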

04.09.2025 19:02 · 👍 4    🔁 0    💬 0    📌 0

Christos Kotanidis checked differences in abstracts between submissions and published papers & assessed whether these differences indicated higher or lower research quality.

Abstracts typically improved, especially in big five medical journals. Evidence for the effectiveness of peer review?

#PRC10
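A minimal sketch of one way to quantify such submission-to-publication changes; the similarity measure is my choice for illustration, not necessarily what Kotanidis used:

```python
import difflib

def abstract_change(submitted: str, published: str) -> float:
    """Return 0.0 for identical abstracts, 1.0 for a complete rewrite."""
    matcher = difflib.SequenceMatcher(None, submitted.split(), published.split())
    return 1.0 - matcher.ratio()

print(abstract_change(
    "We find a significant effect of X on Y.",
    "We find a small but significant effect of X on Y across three samples.",
))
```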

04.09.2025 18:59 · 👍 1    🔁 0    💬 0    📌 0

Andrea Corvillon on distributed vs. panel peer review at the ALMA Observatory:

The most experienced PIs no longer get the best ranks under the distributed review system, but why that is remains unclear.

#PRC10

04.09.2025 16:33 · 👍 1    🔁 0    💬 0    📌 0

Interesting to see that the conference review process (and publishing norms) are so different in computer science compared to other fields.

How do these differences come about? Fundamental differences between fields or chance and inertia?

#PRC10

04.09.2025 16:20 · 👍 1    🔁 0    💬 1    📌 0

Alexander Goldberg measured it with a 7-point Likert scale for overall review quality, but also assessed four sub-categories: reviewers' understanding of the paper, whether important elements were covered, whether reviewers substantiated their comments, and the constructiveness of reviewer comments.
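A minimal sketch of such a rating instrument, assuming the sub-categories use the same 7-point scale as the overall score (the post doesn't say):

```python
from dataclasses import dataclass

@dataclass
class ReviewQualityRating:
    """One rater's scores for one review; field names paraphrase the post."""
    overall: int           # overall review quality
    understanding: int     # did the reviewer understand the paper?
    coverage: int          # were the important elements covered?
    substantiation: int    # were the comments substantiated?
    constructiveness: int  # were the comments constructive?

    def __post_init__(self):
        # Assumed: every dimension is scored on the same 1-7 Likert scale.
        for name, value in vars(self).items():
            if not 1 <= value <= 7:
                raise ValueError(f"{name} must be on the 1-7 scale, got {value}")

rating = ReviewQualityRating(overall=5, understanding=6, coverage=4,
                             substantiation=5, constructiveness=6)
```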

04.09.2025 16:16 · 👍 0    🔁 0    💬 0    📌 0

Di Girolamo explains why the use of the phrase "to our knowledge" lacks reproducibility and accountability.

Good trigger to make an edit in a grant proposal I'm writing.

#PRC10

04.09.2025 15:04 · 👍 3    🔁 0    💬 0    📌 0

Note by Yulin Yu: Data repurposing may serve as an essential mechanism driving scientific innovation BUT may not always garner immediate recognition.

04.09.2025 14:41 · 👍 0    🔁 0    💬 0    📌 0

Data repurposing: taking existing data and reusing it for a different purpose.

(Presentation by Yulin Yu)

Studies repurposing data are at higher risk of bias, so make sure to preregister them (check here for a template): research.tilburguniversity.edu/en/publicati...

#PRC10

04.09.2025 14:40 · 👍 9    🔁 1    💬 1    📌 0

Different findings on time trends and industry funding compared with an earlier meta-analysis by Robert Thibault and others: www.medrxiv.org/content/10.1...

Can this discrepancy be explained by the use of AI?

#PRC10

04.09.2025 14:26 · 👍 1    🔁 0    💬 0    📌 0

Ian Bulovic used OpenAI's GPT to assess selective outcome reporting.

Findings:
- Much outcome switching but decrease over time
- Industry-sponsored trials most at risk
- Assessing outcome switching may seem trivial but is hard even for human coders

#PRC10
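A minimal sketch of how the task might be framed for a model. The prompt wording, model name, and API usage below are my assumptions; the post doesn't describe Bulovic's actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def assess_outcome_switching(registered: str, reported: str) -> str:
    """Ask a model to compare registered vs. reported trial outcomes."""
    prompt = (
        "Compare the pre-registered primary outcomes with the outcomes "
        "reported in the published trial. List any outcomes that were "
        "added, dropped, or switched between primary and secondary.\n\n"
        f"Registered outcomes:\n{registered}\n\n"
        f"Reported outcomes:\n{reported}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the model used in the talk isn't specified
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```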

04.09.2025 14:19 · 👍 0    🔁 0    💬 1    📌 0

Ioannidis: Do we have enough evidence for your proposed actions to improve peer review?

Macleod: The evidence is thin, partly because many journals are hesitant to accommodate meta-research, like RCTs.

#PRC10

04.09.2025 13:29 · 👍 4    🔁 1    💬 0    📌 0

A meta-perspective by Malcolm Macleod on the presentations at #PRC10.

Are we going for low-hanging fruit too much in research on peer review and publication?

04.09.2025 13:23 · 👍 5    🔁 3    💬 1    📌 0

A question to kickstart day 2 of #PRC10:

How would you measure the quality of peer reviews in a scientific study?

Single question? Scale? How many raters? AI?
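Whatever instrument one picks, using multiple raters raises the question of agreement. A minimal sketch with made-up scores, using weighted Cohen's kappa for two raters on a 7-point scale:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative data only: two raters scoring the same seven reviews 1-7.
rater_a = [5, 3, 6, 2, 4, 5, 7]
rater_b = [5, 4, 6, 2, 3, 5, 6]

# Quadratic weights penalize large disagreements more than small ones,
# which suits ordinal Likert data.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")
```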

04.09.2025 12:32 · 👍 2    🔁 0    💬 1    📌 0

Leslie McIntosh:

Markers of (dis)trust in science: Pay attention to email addresses (use of hotmail.com and underscores) and institutional affiliations (new and unknown organizations without verifiable addresses)

#PRC10
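In the same spirit, a toy heuristic for the email marker; real integrity screening combines many weak signals, and the rule below is purely illustrative:

```python
def flag_email(address: str) -> bool:
    """Toy rule: flag hotmail.com addresses with underscores in the local part.
    Deliberately crude, with a high false-positive rate by design."""
    local, _sep, domain = address.lower().rpartition("@")
    return domain == "hotmail.com" and "_" in local

assert flag_email("j_smith_99@hotmail.com")
assert not flag_email("jane.smith@university.edu")
```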

03.09.2025 16:55 · 👍 6    🔁 1    💬 1    📌 0
Open peer review urgently requires evidence: A call to action
Open Peer Review is gaining prominence in attention and use, but to responsibly open up peer review there is an urgent need for additional evidence. This Perspective proposes a preliminary research ag...

See also this piece by Tony Ross-Hellauer and Serge Horbach: doi.org/10.1371/jour...

03.09.2025 16:50 · 👍 1    🔁 0    💬 0    📌 0

John Ioannidis: "We need more RCTs"

I agree, so here is an urgent call to the representatives of journals at #PRC10: Let's empirically test suggested improvements to peer review like open reports, open identities, structured review, results-free review, collaborative review, etc.

Get in touch!

03.09.2025 16:42 · 👍 14    🔁 4    💬 4    📌 0

How do paper mills operate? I always thought they were in cahoots with illegitimate journals but apparently they target normal journals as well, with the editors of those journals playing no role.

(Talk by Tim Kersjes from Springer Nature)

Could open review reports solve this issue?

#PRC10

03.09.2025 15:38 · 👍 10    🔁 5    💬 0    📌 0

Question in the Q&A: How do we know Pangram is a valid tool to detect AI use? Fair question; I would have liked to see more info about their validation process.

#PRC10

03.09.2025 14:57 · 👍 0    🔁 0    💬 0    📌 0

@royperlis.bsky.social based on a study using Pangram to detect AI use in papers: "Less than 25% of authors using GenAI are disclosing its use"

Why is this the case? Do people feel shame for using AI to improve their studies / papers? Do journals discourage (disclosure of) AI use?

#PRC10

03.09.2025 14:57 · 👍 1    🔁 0    💬 1    📌 0

Findings from Mario's study:
- Open reviews include more sentences, mainly involving suggestions and solutions, indicating more constructive reviews
- Open reviews had higher information content scores

His explanation: There is more accountability in an open system

#PRC10

03.09.2025 14:11 · 👍 18    🔁 4    💬 0    📌 0
