Yeah, I really can't stress enough that AI code assistants royally fuck up statistical analyses. And they do it with absolute confidence.
28.02.2026 17:38
Given the recent spicy #rstats threads, an important reminder:
There's no single right way to code
If it works for you, that's all that matters really
Be good to your fellow coders
@martamiori.bsky.social and I have been writing, since 2024, about why Labour's 'Reform' challenge and emphasis was based on a misunderstanding of Labour's vote. Here for anyone interested: politicscentre.nuffield.ox.ac.uk/news-and-eve...
27.02.2026 08:02
Someone should make a Simpson's meme for maximum relevance. In the meantime:
26.02.2026 04:41
This recent RCT of an "AI stethoscope" claims the technology "shows promise" for diagnosing cardiovascular conditions.
It does not.
It is a textbook example of the risks of conducting unprincipled 'per protocol analyses'. Once again, peer review at a major medical journal has failed.
🧵 1/
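To make the per-protocol risk concrete, here is a minimal simulation (my own illustrative sketch with invented numbers, not the trial's data or the thread's analysis): the treatment has zero true effect, but sicker patients abandon the treatment arm more often, so an analysis restricted to protocol completers manufactures a "benefit" while the intention-to-treat comparison stays honest.

```python
import random

random.seed(42)

def simulate_trial(n=10_000, true_effect=0.0):
    """Simulate an RCT of a truly useless treatment (effect = 0).

    Dropout in the treatment arm depends on baseline severity, so a
    per-protocol analysis (completers only) is biased; ITT is not.
    """
    itt_treat, itt_ctrl = [], []
    pp_treat, pp_ctrl = [], []
    for _ in range(n):
        severity = random.gauss(0, 1)              # baseline illness (higher = worse)
        arm = random.choice(["treat", "ctrl"])
        outcome = severity + true_effect * (arm == "treat")
        if arm == "treat":
            itt_treat.append(outcome)
            # sicker patients are more likely to drop out of the protocol
            dropout_p = min(1.0, max(0.0, 0.5 + 0.4 * severity))
            if random.random() > dropout_p:        # patient completed the protocol
                pp_treat.append(outcome)
        else:
            itt_ctrl.append(outcome)
            pp_ctrl.append(outcome)
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(itt_treat) - mean(itt_ctrl),      # intention-to-treat estimate
            mean(pp_treat) - mean(pp_ctrl))        # per-protocol estimate

itt, pp = simulate_trial()
print(f"ITT estimate: {itt:+.2f}  (truth is 0.00)")
print(f"Per-protocol estimate: {pp:+.2f}  (spurious 'benefit')")
```

Nothing in the per-protocol arm is significant of anything except who stayed in the study; peer review should catch exactly this.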
We're a little late for #LoveData26 but it's never too late to introduce our new data editors! These folks will guide our authors' compliance with ESA's Open Research Policy.
24.02.2026 14:16
Now recommended on @peercommunityin.bsky.social!
ecology.peercommunityin.org/articles/rec...
TADA! Guidelines to Improve Code Sharing by @joelpick.bsky.social @eivimeycook.bsky.social et al!
bsky.app/profile/eivi...
📣 We are looking for others to replicate our work!
More information: nanobubbles.hypotheses.org/replication-...
What could this look like? Some potential guidelines:
1. Try to use no more than 100 words in the whole poster (outside of text inside of tables and figures).
2. Make the figures giant.
3. Present *way less information.*
4. Ask a question on the poster to engage the audience.
With everything going on in the world, what better time than now to re-up my thoughts (that is, rant) about scientific posters? (Thread!)
19.02.2026 18:48
Sian Henley and I have just had a NERC grant funded, looking at variation in Southern Ocean diatom traits under fluctuating light regimes. If you are a fearless experimentalist interested in working with us as a postdoc on this 3-year project, come chat with me at Ocean Sciences in Glasgow next week!
19.02.2026 10:40
Led by my talented PhD student @justine-armg.bsky.social, we're running a #meta-analysis of #cross-sex-genetic-correlations in fitness components.
If you have unpublished data or know of studies that might not appear in a systematic search, please reach out, we'd love to include them!
Please share!
I just did the dumbest thing of my entire career to prove a much more serious point.
I tricked ChatGPT and Google, and made them tell other users I'm a competitive hot-dog-eating world champion.
People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it.
📢📢📢 Lectureships at Bristol! 📢📢📢
We're hiring 3 x lecturers (=assistant professor) in Biological Sciences, across the discipline.
Great department, great colleagues, great building, great city
Details here:
www.bristol.ac.uk/jobs/find/de...
This is particularly weird because most of Open Science boils down to: just be honest. Write down in your paper what you actually did. Seems a rather low bar for any science, but apparently pointing this out touches upon a very sensitive issue for some fellow academics.
19.02.2026 05:41
This highlights important differences.
If the LaTeX code is wrong, it'll (nearly always) be obvious. Garbled table -> try again.
And ultimately, it doesn't matter in the same way. Nice docs are nice, but people don't die from a misformatted bullet point.
If analysis code is wrong, it's not obvious.
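To make the contrast concrete, here is a hypothetical example (data and numbers invented for illustration): the script runs cleanly and prints a plausible number, but the "overall mean" is computed by averaging group means, silently ignoring group sizes.

```python
# A silently-wrong analysis: the code runs, the output looks plausible,
# and nothing flags the error -- unlike a garbled LaTeX table.
scores = [[90, 95], [60, 61, 62, 63, 64, 65, 66, 67]]  # two groups, unequal sizes

# Buggy "overall mean": averaging the group means weights a 2-person
# group the same as an 8-person group.
buggy = sum(sum(g) / len(g) for g in scores) / len(scores)

# Correct overall mean: pool all observations before averaging.
flat = [x for g in scores for x in g]
correct = sum(flat) / len(flat)

print(buggy)    # 78.0 -- plausible-looking, but wrong
print(correct)  # 69.3
```

No error message, no garbled output, just a wrong answer that only a reader who re-derives the statistic would notice.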
New paper, on a worrying trend in meta-science: the practice of anonymising datasets of, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.
13.02.2026 16:50
Looks like a cool idea @katelaskowski.bsky.social @joelpick.bsky.social
Replication studies: a win-win for early-career training and behavioral ecology url: academic.oup.com/beheco/artic...
Happy birthday to one of my favourite haters, Charles Darwin
12.02.2026 16:31
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
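The "one to two orders of magnitude" figure can be illustrated with a back-of-envelope calculation (my own sketch with invented rates, not the paper's calibrated model): if significant results are k times more likely to be published than nulls, the published share of significant findings follows directly.

```python
# Back-of-envelope: if significant results are k times more likely to be
# published than nulls, what share of the published record is significant?
def published_significant_share(sig_rate, k):
    """sig_rate: fraction of conducted studies that find p < .05.
    k: publication odds multiplier for significant results."""
    sig_pub = sig_rate * k          # significant studies entering the record
    null_pub = (1 - sig_rate) * 1   # null studies entering the record
    return sig_pub / (sig_pub + null_pub)

# Even if only a third of conducted studies reject the null, a 100x
# selection filter makes null results nearly vanish from journals.
for k in (1, 10, 100):
    print(k, round(published_significant_share(0.33, k), 3))
```

With k = 100 the published share of significant results lands around 98%, in the same ballpark as the 94% / 2% split reported above.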
11.02.2026 17:00
Not everything can be replicated easily, but a lot can. So replicate where you can and help students learn good research practices, including writing! Folks, it's a win-win. Still sorting out details of the new section but should be online very soon. In the meantime, start replicating!
10.02.2026 19:42
How do we know our research results are REAL? We replicate them! Most folks agree but lament how hard it is to publish these replications.
My dearest gentle reader, lament no more! Delighted to unveil: Replication Studies, a new section of Behavioral Ecology 1/
academic.oup.com/beheco/artic...
What's a multiverse good for anyway?
Julia M. Rohrer, Jessica Hullman, and Andrew Gelman
Multiverse analysis has become a fairly popular approach, as indicated by the present special issue on the matter. Here, we take one step back and ask why one would conduct a multiverse analysis in the first place. We discuss various ways in which a multiverse may be employed (as a tool for reflection and critique, as a persuasive tool, as a serious inferential tool) as well as potential problems that arise depending on the specific purpose. For example, it fails as a persuasive tool when researchers disagree about which variations should be included in the analysis, and it fails as a serious inferential tool when the included analyses do not target a coherent estimand. Then, we take yet another step back and ask what the multiverse discourse has been good for and whether any broader lessons can be drawn. Ultimately, we conclude that the multiverse does remain a valuable tool; however, we urge against taking it too seriously.
New preprint! So, what's a multiverse analysis good for anyway?
With @jessicahullman.bsky.social and @statmodeling.bsky.social
juliarohrer.com/wp-content/u...
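For readers who haven't met the idea: a multiverse analysis runs the same question through every defensible combination of analytic choices and reports the whole distribution of estimates, rather than one cherry-picked pipeline. A toy sketch (data and choices invented for illustration, not the preprint's example):

```python
import itertools, random, statistics

random.seed(1)
# Toy dataset: outcome loosely related to a predictor, plus one extreme outlier.
data = [(x, 0.3 * x + random.gauss(0, 1)) for x in range(50)] + [(60, -20)]

def slope(points):
    """Ordinary least-squares slope of y on x."""
    xs, ys = zip(*points)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)

# The "multiverse": every combination of two defensible analytic choices.
outlier_rules = {"keep all": lambda d: d,
                 "drop y < -10": lambda d: [p for p in d if p[1] >= -10]}
subsets = {"full sample": lambda d: d,
           "x below median": lambda d: [p for p in d if p[0] <= 25]}

for (oname, orule), (sname, srule) in itertools.product(
        outlier_rules.items(), subsets.items()):
    est = slope(srule(orule(data)))
    print(f"{oname:>14} | {sname:>14} | slope = {est:+.2f}")
```

The spread of estimates across the four specifications is the point: it shows how much the conclusion depends on choices a single-pipeline paper would never report.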
Popularity of the first name Kenzie correlates with UFO sightings in South Dakota (r=0.939)
31.01.2026 13:30
Oh dear. Our favourite Fellow of the @royalsociety.org is in the news again
31.01.2026 07:07
A new study from Anthropic finds that gains in coding efficiency when relying on AI assistance did not meet statistical significance; AI use noticeably degraded programmers' understanding of what they were doing. Incredible.
30.01.2026 23:47
If you're going to use LLMs to help you build an app or code:
1. you need RIGOROUS, rigorous unit testing, and/or
2. you need to already mostly know how to build the thing yourself so you can check and modify
I fear #2 is going to become less common, so let's be super adamant about #1 😬
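As a sketch of point 1, suppose an assistant drafted the effect-size helper below (a hypothetical example, not from any real post or library). A handful of hand-checked assertions already pins down the zero case, the sign convention, and the pooled-SD denominator: exactly the details an LLM can silently get wrong.

```python
import math

# Hypothetical LLM-drafted helper (invented for illustration).
def cohens_d(a, b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Rigorous, hand-checked unit tests: the only safety net if you
# couldn't have written the function yourself.
assert cohens_d([1, 2, 3], [1, 2, 3]) == 0           # identical groups -> 0
assert round(cohens_d([2, 4], [1, 3]), 3) == 0.707   # worked by hand: 1/sqrt(2)
assert cohens_d([1, 3], [2, 4]) < 0                  # sign: first group below second
```

The worked-by-hand case is the crucial one: a test that merely re-runs the function cannot catch a formula the model got subtly wrong.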
Interested in the TADA! guidelines for improving analytical code sharing & reproducibility? (doi.org/10.32942/X2D...)
Register for the talk by @joelpick.bsky.social & @eivimeycook.bsky.social at: libcal.essex.ac.uk/event/4465297
#OpenScience #Reproducibility #OpenResearch
P.S. Give that doggy a treat!
Thanks to some great comments and suggestions, we've updated TADA!
Read it here: ecoevorxiv.org/repository/v...
Transferable, Available, Documented, Annotated.