@denolmo.bsky.social (Postdoc @ QUEST Center for Responsible Research & Tilburg University; into research on improving preregistration, secondary data analysis, and peer review):
Hi GRIOS, is the contact form on your website functional? I sent a message just after Metascience25 but haven't heard back yet.
01.10.2025 15:13

@michelenuijten.bsky.social:
18.09.2025 06:54
It's great to see authors sharing their experiences with publishing on MetaROR (MetaResearch Open Review), our open review platform for metascience using the publish-review-curate model: www.openscience.nl/en/cases/the...
09.09.2025 11:42
Here's another conference that aims to bridge fields: errorsin.science/pse8/
In Leiden from 11-13 Feb 2026 (submission deadline 15 October)
We are about a month away from releasing a complete refresh of the OSF user interface. The team has been working on this for a very long time, and we are very excited to be able to share it soon. A preview picture:
04.09.2025 21:57
- Journals should state their aims and scope from the outset and implement mechanisms to assess whether they achieve those aims. This could also include things like "we want to publish high-risk research"
- Meta-research is necessary to find out which journals deserve prestige
#PRC10
- "The replication crisis forced changes in transparency for the research itself, but not for the publication process"
- We need to raise our expectations of journals. How? Nullius in verba (don't take their word for it!)
#PRC10
Many interesting tidbits in @simine.com's talk. A selection:
- Journal prestige depends on factors like aims and scope, selectivity, and impact factor, but changes in these factors do not always lead to changes in journal prestige - journal prestige is sticky
#PRC10
There is also a publish-review-curate publishing platform specifically dedicated to meta-research: metaror.org
Send your studies on peer review there and be part of the future of science!
(CoI statement: I'm an ERC representative at MetaROR)
#PRC10
eLife (talk by Nicola Adamson) uses a publish-review-curate model and a common vocabulary to assess manuscripts.
For strength of evidence: exceptional, compelling, convincing, solid, incomplete, & inadequate
For significance of findings: landmark, fundamental, important, valuable, & useful
#PRC10
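The eLife vocabulary above is ordinal, so it can be sketched as ranked scales for programmatic comparison. A toy encoding in Python; the numeric ranks are my assumption (eLife publishes the terms, not scores):

```python
# eLife assessment terms, ordered lowest to highest (rank values assumed).
STRENGTH_OF_EVIDENCE = ["inadequate", "incomplete", "solid",
                        "convincing", "compelling", "exceptional"]
SIGNIFICANCE = ["useful", "valuable", "important", "fundamental", "landmark"]

def rank(term: str, scale: list[str]) -> int:
    """Return the ordinal position of an assessment term (0 = lowest)."""
    return scale.index(term.lower())

def stronger_evidence(a: str, b: str) -> bool:
    """True if assessment a indicates stronger evidence than b."""
    return rank(a, STRENGTH_OF_EVIDENCE) > rank(b, STRENGTH_OF_EVIDENCE)
```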
New peer review dataset incoming!
Covers topic area, editorial decision, author characteristics (institutional prestige, region, gender), BoRE evaluations, and review characteristics (length, sentiment, z-score, reviewer gender).
(Talk by Aaron Clauset)
#PRC10
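The variables listed above suggest a record schema. A hypothetical sketch; field names and types are my assumptions, the talk listed the variables, not a format:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One peer-review record, with the variables from Clauset's talk."""
    topic_area: str
    editorial_decision: str            # e.g. "accept", "revise", "reject"
    author_institution_prestige: float
    author_region: str
    author_gender: str
    bore_evaluation: str               # BoRE evaluation, as listed in the talk
    review_length_words: int
    review_sentiment: float            # e.g. in [-1, 1]
    review_zscore: float
    reviewer_gender: str
```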
Christos Kotanidis checked differences in abstracts between submissions and published papers & assessed whether these differences indicated higher or lower research quality.
Abstracts typically improved, especially in big five medical journals. Evidence for the effectiveness of peer review?
#PRC10
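The first step of a study like Kotanidis' can be sketched as a text diff between the submitted and published abstract. A minimal illustration using Python's difflib; the study assessed quality by expert judgment, not string similarity:

```python
import difflib

def abstract_diff(submitted: str, published: str) -> tuple[float, list[str]]:
    """Return a 0-1 similarity ratio and a unified diff of two abstract versions."""
    ratio = difflib.SequenceMatcher(None, submitted, published).ratio()
    diff = list(difflib.unified_diff(
        submitted.splitlines(), published.splitlines(),
        fromfile="submitted", tofile="published", lineterm=""))
    return ratio, diff
```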
Andrea Corvillon on distributed vs. panel peer review at the ALMA Observatory:
Most experienced PIs no longer have the best ranks in a distributed review system, but why that is remains unclear.
#PRC10
Interesting to see how different the conference review process (and publishing norms) are in computer science compared to other fields.
How do these differences come about? Fundamental differences between fields, or chance and inertia?
#PRC10
Alexander Goldberg did it with a 7-point Likert scale for overall review quality, but also by assessing four sub-categories: reviewers' understanding of the paper, whether important elements were covered, whether reviewers substantiated their comments, and the constructiveness of reviewer comments.
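Goldberg's scheme, one overall 7-point rating plus four sub-category ratings, could be combined into a single score. A sketch; averaging the five ratings is my assumption, not the study's method:

```python
from statistics import mean

SUBCATEGORIES = ["understanding", "coverage", "substantiation", "constructiveness"]

def review_quality_score(overall: int, subscores: dict[str, int]) -> float:
    """Average the overall 1-7 rating with the four 1-7 sub-category ratings."""
    if not 1 <= overall <= 7:
        raise ValueError("overall must be on a 1-7 scale")
    for name in SUBCATEGORIES:
        if not 1 <= subscores[name] <= 7:
            raise ValueError(f"{name} must be on a 1-7 scale")
    return mean([overall] + [subscores[n] for n in SUBCATEGORIES])
```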
04.09.2025 16:16
Di Girolamo explains why the use of the phrase "to our knowledge" lacks reproducibility and accountability.
Good trigger to make an edit in a grant proposal I'm writing.
#PRC10
Note by Yulin Yu: Data repurposing may serve as an essential mechanism driving scientific innovation BUT may not always garner immediate recognition.
04.09.2025 14:41
Data repurposing: taking existing data and reusing it for a different purpose.
(Presentation by Yulin Yu)
Studies repurposing data are at higher risk of bias, so make sure to preregister them (check here for a template): research.tilburguniversity.edu/en/publicati...
#PRC10
Different findings on time trends and industry funding than in an earlier meta-analysis by Robert Thibault and others: www.medrxiv.org/content/10.1...
Can this discrepancy be explained by the use of AI?
#PRC10
Ian Bulovic used OpenAI's GPT to assess selective outcome reporting.
Findings:
- Much outcome switching but decrease over time
- Industry-sponsored trials most at risk
- Assessing outcome switching may seem trivial but is hard even for human coders
#PRC10
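At its core, outcome-switching detection compares preregistered against reported outcomes. A deliberately naive sketch with exact set matching, which also shows why the last point above holds: "30-day mortality" vs "mortality at 30 days" won't match, which is why Bulovic's study needed GPT (or careful human coders) on free text:

```python
def outcome_switching(preregistered: set[str], reported: set[str]) -> dict[str, set[str]]:
    """Classify outcomes as dropped, added, or kept between registration and report."""
    return {
        "dropped": preregistered - reported,  # promised but never reported
        "added": reported - preregistered,    # reported but never promised
        "kept": preregistered & reported,
    }
```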
Ioannidis: Do we have enough evidence for your proposed actions to improve peer review?
Macleod: The evidence is thin, partly because many journals are hesitant to accommodate meta-research, like RCTs.
#PRC10
A meta-perspective by Malcolm Macleod on the presentations at #PRC10.
Are we going for low-hanging fruit too much in research on peer review / publication?
A question to kickstart day 2 of #PRC10:
How would you measure the quality of peer reviews in a scientific study?
Single question? Scale? How many raters? AI?
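One answer to the "how many raters" part, sketched: with two raters scoring the same reviews, quantify chance-corrected agreement with Cohen's kappa. The two-rater restriction and the from-scratch implementation are my choices, not a proposal from the conference:

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa for two raters' categorical (or Likert) ratings."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("need two equal-length, non-empty rating lists")
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)  # chance agreement
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)
```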
Leslie McIntosh:
Markers of (dis)trust in science: Pay attention to email addresses (use of hotmail.com and underscores) and institutional affiliations (new and unknown organizations without verifiable addresses)
#PRC10
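The email heuristics McIntosh mentioned can be sketched as a simple flagger. The signals (free-mail domain, underscores) come from the talk; the function name, domain list, and everything else are assumptions, and real screening would combine many more signals:

```python
# Free-mail domains to flag; only hotmail.com was named in the talk,
# the rest of the list is an assumption.
FREE_MAIL = {"hotmail.com", "gmail.com", "yahoo.com"}

def distrust_flags(email: str) -> list[str]:
    """Return the (dis)trust markers present in a submitting author's email."""
    flags = []
    local, _, domain = email.partition("@")
    if domain.lower() in FREE_MAIL:
        flags.append("free-mail domain")
    if "_" in local:
        flags.append("underscore in local part")
    return flags
```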
See also this piece by Tony Ross-Hellauer and Serge Horbach: doi.org/10.1371/jour...
03.09.2025 16:50
John Ioannidis: "We need more RCTs"
I agree, so here is an urgent call to the representatives of journals at #PRC10: Let's empirically test suggested improvements to peer review like open reports, open identities, structured review, results-free review, collaborative review, etc.
Get in touch!
How do paper mills operate? I always thought they were in cahoots with illegitimate journals but apparently they target normal journals as well, with the editors of those journals playing no role.
(Talk by Tim Kersjes from Springer Nature)
Could open review reports solve this issue?
#PRC10
Question in the Q&A: How do we know Pangram is a valid tool to detect AI use? Fair question; I would have liked to see more information about their validation process.
#PRC10
@royperlis.bsky.social based on a study using Pangram to detect AI use in papers: "Less than 25% of authors using GenAI are disclosing its use"
Why is this the case? Do people feel shame for using AI to improve their studies / papers? Do journals discourage (disclosure of) AI use?
#PRC10
Findings from Mario's study:
- Open reviews include more sentences, mainly suggestions and solutions, indicating more constructive reviews
- Open reviews had higher information content scores
His explanation: There is more accountability in an open system
#PRC10
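The two metrics from Mario's study, review length and constructiveness, can be operationalised crudely as counts over sentences. A toy sketch; the keyword list and sentence splitter are assumptions, and the study used its own (more careful) measures:

```python
import re

# Phrases that loosely mark a constructive suggestion (assumed list).
SUGGESTION_MARKERS = ("suggest", "recommend", "could", "should", "consider")

def review_metrics(text: str) -> dict[str, int]:
    """Count sentences and suggestion-bearing sentences in a review."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    suggestions = sum(
        any(m in s.lower() for m in SUGGESTION_MARKERS) for s in sentences
    )
    return {"sentences": len(sentences), "suggestion_sentences": suggestions}
```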