🫶
04.03.2026 20:36 · likes 1 · reposts 0 · replies 0 · quotes 0
@jamiecummins.bsky.social
Currently a visiting researcher at @bennettoxford.bsky.social. Normally at Uni of Bern. Meta-scientist building tools to help other scientists. NLP, simulation, & LLMs. Creator and developer of RegCheck (https://regcheck.app). 1/4 of @error.reviews. 🇮🇪
Here's to a radical future where openly-available IPD is the default!
04.03.2026 17:05 · likes 7 · reposts 1 · replies 0 · quotes 0
New post!
It may seem ambitious to ask for individual patient data from clinical trials to be shared, anonymized, for use by other researchers.
But the history of medicine shows us that clinical trials have already undergone a series of transformations that once seemed equally bold:
Two new papers from the lab on research practices in security & privacy research: 1) Reliability of measures and the use of Cronbach's α (osf.io/preprints/ps...), 2) Practices around retraction and correction notices by ACM and IEEE (osf.io/preprints/ps...)
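For readers who haven't met the statistic in the first paper: Cronbach's α for a k-item scale is k/(k−1) · (1 − Σ item variances / variance of the summed scale). A minimal NumPy sketch (the function name is mine, not from the paper):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

As a sanity check, a scale whose items are perfectly correlated yields α = 1.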
03.03.2026 09:59 · likes 20 · reposts 8 · replies 0 · quotes 1
Apropos of nothing, that IT response feels very ChatGPT
02.03.2026 12:11 · likes 0 · reposts 0 · replies 1 · quotes 0
New episode of HARD DRUGS!
Should everyone be taking statins?
Statins have revolutionised the treatment of heart disease, and they're one of many reasons for the long-term decline in cardiovascular mortality.
Why do I have to pretend that I'm going to print something in order to save it as a PDF. Why do I have to engage in a little ruse.
23.02.2026 21:43 · likes 19285 · reposts 2922 · replies 344 · quotes 1
OpenSAFELY is open from today! Huge thanks to all who supported this vast collaboration: whole population GP data; in a productive platform; innovative privacy protections; unprecedented support from professions, privacy campaigners; &c
Now it's over to users!
www.bennett.ox.ac.uk/blog/2026/02...
We regret to inform you that your paper cannot be considered for publication, but we encourage you to submit it to our GOLD Open Access sister journal
23.02.2026 10:41 · likes 50 · reposts 15 · replies 1 · quotes 0
We have something quite related in the works!
20.02.2026 11:41 · likes 1 · reposts 0 · replies 1 · quotes 0
The latest episode of our @bpsofficial.bsky.social @researchdigest.bsky.social podcast PsychCrunch.
We took a different approach to this one, and I'm really grateful to @maddipow.bsky.social @richardwiseman.bsky.social @margaritap.bsky.social @jamiecummins.bsky.social and others for contributing.
Economists!
I am looking for someone to coauthor an article on the massive decline in costs of genome sequencing.
The science is all fine, but I'm interested in the economics of it all: the innovation, funding, prizes, patents, etc.
Does anyone come to mind? Thanks!
Thanks Alexander! I'm always happy to hear about how you use it in the future & whether you notice anything that could be improved.
18.02.2026 08:40 · likes 0 · reposts 0 · replies 0 · quotes 0
Just used this (very interesting!) paper as an example to try out RegCheck v2 for the first time.
RegCheck.app by @jamiecummins.bsky.social et al. is an open-source, LLM-based tool to compare a published study against a replication plan. All the usual LLM caveats apply. But seems pretty useful.
thanks so much for the share Tiago
18.02.2026 08:40 · likes 1 · reposts 0 · replies 0 · quotes 0
Really nice piece on @psychmag.bsky.social by @jamiecummins.bsky.social this month!
Thanks for writing this Jamie!
📣 New preprint!
We brought together experts from academia, major video game studios, NGOs, funding bodies, and civil society groups to ask: what should be prioritised when it comes to the future of video games? 🧵
[image: why-not-both.png]
14.02.2026 13:54 · likes 1 · reposts 0 · replies 0 · quotes 0
Aligns with some of our experiences too. Hopefully our paper can help folk to reevaluate some of their own squeamishness.
14.02.2026 13:53 · likes 0 · reposts 0 · replies 0 · quotes 0
3. Generating clearer inputs for downstream tools
Imagine you're in a world in which LLMs play an increasingly large role in research. Tools like RegCheck screen studies for compliance against pre-analysis plans. It may be all the more important to ensure that your plans are clearly documented. 6/
I've built a new tool!
You can upload your pre-analysis plan or registered report before submitting it to a registry or journal, and it will screen it for completeness, clarity, and consistency. 1/ 🧵
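The screening idea can be illustrated with a toy, rule-based version. This is not the tool's actual implementation (which is LLM-based); the checklist items below are my own illustrative assumptions:

```python
# Hypothetical checklist a completeness screen might look for in a
# pre-analysis plan; the real tool's criteria will differ.
REQUIRED_ITEMS = ["hypotheses", "sample size", "exclusion criteria",
                  "primary outcome", "analysis plan"]

def completeness_screen(plan_text: str) -> list[str]:
    """Return the checklist items the plan text never mentions."""
    lowered = plan_text.lower()
    return [item for item in REQUIRED_ITEMS if item not in lowered]
```

A plan that only states its hypotheses and sample size would come back flagged for the remaining three items; an LLM-based screen generalises this beyond literal keyword matches.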
This is a no-brainer. Metascience is not accountable if it is not transparent about the research that it uses or critiques.
13.02.2026 18:49 · likes 20 · reposts 6 · replies 0 · quotes 0
Should meta-science articles hold themselves to the same standards of transparency and reproducibility that the field prescribes for others?
We think so.
New preprint and ๐งต
Meta-scientists cannot demand transparency from scientists while failing to meet those same standards. In this paper we make one simple recommendation: meta-scientific data should be deanonymised by default.
13.02.2026 16:59 · likes 20 · reposts 5 · replies 2 · quotes 0
Against Anonymising Meta-Scientific Data: https://osf.io/6eyjf
13.02.2026 16:40 · likes 4 · reposts 1 · replies 0 · quotes 2
New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.
13.02.2026 16:50 · likes 97 · reposts 52 · replies 6 · quotes 7
Our lab has the capacity to test ~500 uni students each semester
If youโre a researcher in cognitive psychology or metascience and need data collection support, weโd love to collaborate. We can help collect high-quality data from a large student sample.
Get in touch to discuss potential projects!
shot you a mail!
11.02.2026 17:28 · likes 1 · reposts 0 · replies 0 · quotes 0
Extremely cool. I'd be eager to play around with the rater-level ratings data when you make it open.
11.02.2026 17:18 · likes 1 · reposts 1 · replies 1 · quotes 0
It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
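The abstract doesn't spell out its "simple model of publication bias", but one common back-of-the-envelope version treats publication as a filter and asks what selection odds ratio would turn an assumed base rate of significant results into the observed published share. A sketch under that assumption (the base-rate values are illustrative; only the 94% figure comes from the paper):

```python
def selection_odds_ratio(published_sig: float, base_sig: float) -> float:
    """Odds ratio at which significant results pass the publication
    filter relative to nulls, given the share of significant results
    among published papers and an assumed share among all studies run."""
    pub_odds = published_sig / (1 - published_sig)
    base_odds = base_sig / (1 - base_sig)
    return pub_odds / base_odds

# If 94% of published abstracts claim significance but only half of all
# conducted analyses come out significant, the implied filter is:
ratio = selection_odds_ratio(0.94, 0.50)  # ≈ 15.7
```

Lower assumed base rates push the implied ratio higher (e.g. a 30% base rate implies roughly 37×), which is consistent with the paper's "one to two orders of magnitude" range.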
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
11.02.2026 17:00 · likes 640 · reposts 223 · replies 30 · quotes 51