
Jamie Cummins

@jamiecummins.bsky.social

Currently a visiting researcher at @bennettoxford.bsky.social. Normally at Uni of Bern. Meta-scientist building tools to help other scientists. NLP, simulation, & LLMs. Creator and developer of RegCheck (https://regcheck.app). 1/4 of @error.reviews. 🇮🇪

2,863 Followers  |  720 Following  |  954 Posts  |  Joined: 24.06.2023

Posts by Jamie Cummins (@jamiecummins.bsky.social)

🫶

04.03.2026 20:36 — 👍 1    🔁 0    💬 0    📌 0

Here's to a radical future where openly-available IPD is the default!

04.03.2026 17:05 — 👍 7    🔁 1    💬 0    📌 0
Clinical trial reforms that once seemed radical: How randomized controlled trials, preregistration, and results reporting became standard practice.

New post!

It may seem ambitious to ask for individual patient data from clinical trials to be shared, anonymized, for use by other researchers.

But the history of medicine shows us that clinical trials have already undergone a series of transformations that once seemed equally bold:

04.03.2026 16:57 — 👍 72    🔁 22    💬 3    📌 3

Two new papers from the lab on research practices in security & privacy research: 1) Reliability of measures and the use of Cronbach's α osf.io/preprints/ps..., 2) Practices around retraction and correction notices by ACM and IEEE (osf.io/preprints/ps...)

03.03.2026 09:59 — 👍 20    🔁 8    💬 0    📌 1

Apropos of nothing, that IT response feels very ChatGPT

02.03.2026 12:11 — 👍 0    🔁 0    💬 1    📌 0

New episode of HARD DRUGS!

Should everyone be taking statins?

Statins have revolutionised the treatment of heart disease, and they're one of many reasons for the long-term decline in cardiovascular mortality.

27.02.2026 20:20 — 👍 53    🔁 8    💬 4    📌 6

Why do I have to pretend that I'm going to print something in order to save it as a PDF. Why do I have to engage in a little ruse.

23.02.2026 21:43 — 👍 19285    🔁 2922    💬 344    📌 1
OpenSAFELY news: you can apply to do non-COVID research, from today! | Bennett Institute for Applied Data Science We are delighted to announce that - from today - you can submit applications to the OpenSAFELY service for non-COVID-19 studies.

OpenSAFELY is open from today! Huge thanks to all who supported this vast collaboration: whole population GP data; in a productive platform; innovative privacy protections; unprecedented support from professions, privacy campaigners; &c

Now it's over to users!

www.bennett.ox.ac.uk/blog/2026/02...

23.02.2026 16:17 — 👍 171    🔁 84    💬 6    📌 15

We regret to inform you that your paper cannot be considered for publication, but we encourage you to submit it to our GOLD Open Access sister journal

23.02.2026 10:41 — 👍 50    🔁 15    💬 1    📌 0

We have something quite related in the works!

20.02.2026 11:41 — 👍 1    🔁 0    💬 1    📌 0

The latest episode of our @bpsofficial.bsky.social @researchdigest.bsky.social podcast PsychCrunch.

We took a different approach to this one, and I'm really grateful to @maddipow.bsky.social @richardwiseman.bsky.social @margaritap.bsky.social @jamiecummins.bsky.social and others for contributing.

19.02.2026 11:19 — 👍 5    🔁 5    💬 0    📌 0

Economists!

I am looking for someone to coauthor an article on the massive decline in costs of genome sequencing.

The science is all fine, but I'm interested in the economics of it all: the innovation, funding, prizes, patents, etc.

Does anyone come to mind? Thanks!

11.01.2026 22:07 — 👍 65    🔁 40    💬 13    📌 4

Thanks Alexander! I'm always happy to hear about how you use it in the future & whether you notice anything that could be improved.

18.02.2026 08:40 — 👍 0    🔁 0    💬 0    📌 0

Just used this (very interesting!) paper as an example to try out RegCheck v2 for the first time.

RegCheck.app by @jamiecummins.bsky.social et al. is an open-source, LLM-based tool to compare a published study against a replication plan. All the usual LLM caveats apply, but it seems pretty useful.

17.02.2026 16:46 — 👍 8    🔁 2    💬 1    📌 0

thanks so much for the share Tiago 🫂

18.02.2026 08:40 — 👍 1    🔁 0    💬 0    📌 0

Really nice piece on @psychmag.bsky.social by @jamiecummins.bsky.social this month!

Thanks for writing this Jamie! 👌🏼

16.02.2026 17:30 — 👍 6    🔁 2    💬 1    📌 0

📣 New preprint!

We brought together experts from academia, major video game studios, NGOs, funding bodies, and civil society groups to ask: what should be prioritised when it comes to the future of video games? 🧵

17.02.2026 14:06 — 👍 23    🔁 13    💬 1    📌 0

why-not-both.png

14.02.2026 13:54 — 👍 1    🔁 0    💬 0    📌 0

Aligns with some of our experiences too. Hopefully our paper can help folk to reevaluate some of their own squeamishness

14.02.2026 13:53 — 👍 0    🔁 0    💬 0    📌 0

3. Generating clearer inputs for downstream tools

Imagine a world in which LLMs play an increasingly large role in research, and tools like RegCheck screen for compliance against pre-analysis plans. It becomes all the more important to ensure that your plans are clearly documented. 6/

13.02.2026 20:21 — 👍 3    🔁 1    💬 1    📌 0

I've built a new tool!

You can upload your pre-analysis plan or registered report before submitting it to a registry or journal, and it will screen it for completeness, clarity, and consistency. 1/ 🧵

13.02.2026 20:21 — 👍 31    🔁 14    💬 3    📌 2

This is a no-brainer. Metascience is not accountable if it is not transparent about the research that it uses or critiques.

13.02.2026 18:49 — 👍 20    🔁 6    💬 0    📌 0

Should meta-science articles hold themselves to the same standards of transparency and reproducibility that the field prescribes for others?

We think so.

New preprint and 🧵

13.02.2026 16:55 — 👍 36    🔁 9    💬 0    📌 0

Meta-scientists cannot demand transparency from scientists while failing to meet those same standards. In this paper we make one simple recommendation: meta-scientific data should be deanonymised by default.

13.02.2026 16:59 — 👍 20    🔁 5    💬 2    📌 0

Against Anonymising Meta-Scientific Data: https://osf.io/6eyjf

13.02.2026 16:40 — 👍 4    🔁 1    💬 0    📌 2

New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.

13.02.2026 16:50 — 👍 97    🔁 52    💬 6    📌 7

Our lab has the capacity to test ~500 uni students each semester.
If you're a researcher in cognitive psychology or metascience and need data collection support, we'd love to collaborate. We can help collect high-quality data from a large student sample.
Get in touch to discuss potential projects!

12.02.2026 12:18 — 👍 26    🔁 22    💬 3    📌 0

Shot you an email!

11.02.2026 17:28 — 👍 1    🔁 0    💬 0    📌 0

Extremely cool. I'd be eager to play around with the rater-level ratings data when you make it open.

11.02.2026 17:18 — 👍 1    🔁 0    💬 1    📌 0
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.


I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.

11.02.2026 17:00 — 👍 640    🔁 223    💬 30    📌 51