
Joel Pick

@joelpick.bsky.social

Evolutionary ecologist and open science advocate. Interested in social evolution, population dynamics, statistics, and open science. Parent x2. Incompetent but enthusiastic naturalist

961 Followers  |  480 Following  |  52 Posts  |  Joined: 17.11.2023

Posts by Joel Pick (@joelpick.bsky.social)

Yeah, I really can't stress enough that AI code assistants royally fuck up statistical analyses. And they do it with absolute confidence.

28.02.2026 17:38 — 👍 26    🔁 4    💬 1    📌 2
Video thumbnail

Given the recent spicy #rstats threads, an important reminder:

There's no single right way to code

If it works for you, that's all that matters really

Be good to your fellow coders

27.02.2026 09:47 — 👍 54    🔁 8    💬 7    📌 3
Article - Nuffield Politics Research Centre

@martamiori.bsky.social and I have been writing since 2024 about why Labour's emphasis on the 'Reform' challenge was based on a misunderstanding of Labour's vote. Here for anyone interested: politicscentre.nuffield.ox.ac.uk/news-and-eve...

27.02.2026 08:02 — 👍 113    🔁 61    💬 2    📌 16
Post image

Someone should make a Simpsons meme for maximum relevance. In the meantime:

26.02.2026 04:41 — 👍 8    🔁 2    💬 0    📌 0

This recent RCT of an "AI stethoscope" claims the technology "shows promise" for diagnosing cardiovascular conditions.

It does not.

It is a textbook example of the risks of conducting unprincipled 'per protocol analyses'. Once again, peer review at a major medical journal has failed.

🧡 1/

25.02.2026 16:44 — 👍 411    🔁 184    💬 7    📌 31

We're a little late for #LoveData26 but it's never too late to introduce our new data editors! These folks will guide our authors' compliance with ESA's Open Research Policy.

24.02.2026 14:16 — 👍 3    🔁 2    💬 1    📌 1

Now recommended on @peercommunityin.bsky.social!

ecology.peercommunityin.org/articles/rec...

24.02.2026 09:29 — 👍 9    🔁 5    💬 0    📌 0

TADA! Guidelines to Improve Code Sharing by @joelpick.bsky.social @eivimeycook.bsky.social et al!

bsky.app/profile/eivi...

24.02.2026 14:48 — 👍 5    🔁 2    💬 0    📌 0
Post image

📣 We are looking for others to replicate our work!

🔗 More information: nanobubbles.hypotheses.org/replication-...

12.02.2026 08:46 — 👍 6    🔁 9    💬 0    📌 1

What could this look like? Some potential guidelines:

1. Try to use no more than 100 words in the whole poster (excluding text inside tables and figures).

2. Make the figures giant.

3. Present *way less information.*

4. Ask a question on the poster to engage the audience.

19.02.2026 18:53 — 👍 27    🔁 5    💬 3    📌 1

With everything going on in the world, what better time than now to re-up my thoughts (that is, rant) about scientific posters? (Thread!)

19.02.2026 18:48 — 👍 40    🔁 14    💬 2    📌 6
Post image

Sian Henley and I have just had a NERC grant funded, looking at variation in Southern Ocean diatom traits under fluctuating light regimes. If you are a fearless experimentalist interested in working with us as a postdoc on this 3-year project, come chat with me at Ocean Sciences in Glasgow next week!

19.02.2026 10:40 — 👍 11    🔁 5    💬 1    📌 0
Post image

Led by my talented PhD student @justine-armg.bsky.social, we're running a #meta-analysis of #cross-sex-genetic-correlations in fitness components.

If you have unpublished data or know of studies that might not appear in a systematic search, please reach out; we'd love to include them!

Please share!

19.02.2026 13:50 — 👍 13    🔁 20    💬 0    📌 1
Post image

I just did the dumbest thing of my entire career to prove a much more serious point.

I tricked ChatGPT and Google, and made them tell other users I'm a competitive hot-dog-eating world champion

People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it

18.02.2026 16:37 — 👍 4806    🔁 2116    💬 85    📌 298
Post image

📢📢📢 Lectureships at Bristol! 📢📢📢

We're hiring 3 x lecturers (=assistant professor) in Biological Sciences, across the discipline.

Great department, great colleagues, great building, great city

Details here:
www.bristol.ac.uk/jobs/find/de...

18.02.2026 08:16 — 👍 50    🔁 82    💬 0    📌 0

This is particularly weird because most of Open Science boils down to: just be honest. Write down in your paper what you actually did. Seems a rather low bar for any science, but apparently pointing this out touches upon a very sensitive issue for some fellow academics.

19.02.2026 05:41 — 👍 31    🔁 9    💬 1    📌 0

This highlights important differences.
If the LaTeX code is wrong, it'll (nearly always) be obvious. Garbled table -> try again.

And ultimately, it doesn't matter in the same way. Nice docs are nice, but people don't die from a misformatted bullet point.

If analysis code is wrong, it's not obvious.

16.02.2026 11:32 — 👍 15    🔁 3    💬 0    📌 0

New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.

13.02.2026 16:50 — 👍 97    🔁 52    💬 6    📌 7
Replication studies: a win-win for early-career training and behavioral ecology Replicating previous research builds confidence that results are real and meaningful. But close replications are rare due to limitations in resources and d

Looks like a cool idea @katelaskowski.bsky.social @joelpick.bsky.social

Replication studies: a win-win for early-career training and behavioral ecology url: academic.oup.com/beheco/artic...

13.02.2026 17:25 — 👍 8    🔁 5    💬 1    📌 0
Post image

Happy birthday to one of my favourite haters, Charles Darwin

12.02.2026 16:31 — 👍 10350    🔁 3082    💬 162    📌 419
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.


I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
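The "one to two orders of magnitude" filter in the abstract can be sanity-checked with back-of-envelope arithmetic (my sketch, not the authors' calibrated model):

```python
# Back-of-envelope publication filter (my illustration, not the paper's
# calibrated model). If a fraction p_null of completed studies yield only
# nulls, but just 2% of published abstracts do, the implied odds ratio
# favouring significant results over nulls is:

def implied_filter(p_null_produced, p_null_published=0.02):
    odds_published = (1 - p_null_published) / p_null_published
    odds_produced = (1 - p_null_produced) / p_null_produced
    return odds_published / odds_produced

# Across plausible base rates the filter spans roughly one to two
# orders of magnitude, consistent with the abstract's claim:
for p in (0.2, 0.5, 0.8):
    print(f"base rate {p}: filter ~{implied_filter(p):.0f}x")
```

With a 50% null base rate the implied filter is 49x; at 80% it is 196x.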

11.02.2026 17:00 — 👍 638    🔁 223    💬 30    📌 51
ALT: a red neon sign that says "replicate things" on it

Not everything can be replicated easily, but a lot can. So replicate where you can and help students learn good research practices, including writing! Folks, it's a win-win. Still sorting out details of the new section but should be online very soon. In the meantime, start replicating!

10.02.2026 19:42 — 👍 11    🔁 1    💬 1    📌 0
Replication studies: a win-win for early-career training and behavioral ecology Replicating previous research builds confidence that results are real and meaningful. But close replications are rare due to limitations in resources and d

How do we know our research results are REAL? We replicate them! Most folks agree but lament how hard it is to publish these replications.

My dearest gentle reader, lament no more! Delighted to unveil: Replication Studies, a new section of Behavioral Ecology 1/

academic.oup.com/beheco/artic...

10.02.2026 19:42 — 👍 210    🔁 107    💬 4    📌 8
What's a multiverse good for anyway?

Julia M. Rohrer, Jessica Hullman, and Andrew Gelman

Multiverse analysis has become a fairly popular approach, as indicated by the present special issue on the matter. Here, we take one step back and ask why one would conduct a multiverse analysis in the first place. We discuss various ways in which a multiverse may be employed – as a tool for reflection and critique, as a persuasive tool, as a serious inferential tool – as well as potential problems that arise depending on the specific purpose. For example, it fails as a persuasive tool when researchers disagree about which variations should be included in the analysis, and it fails as a serious inferential tool when the included analyses do not target a coherent estimand. Then, we take yet another step back and ask what the multiverse discourse has been good for and whether any broader lessons can be drawn. Ultimately, we conclude that the multiverse does remain a valuable tool; however, we urge against taking it too seriously.


New preprint! So, what's a multiverse analysis good for anyway?

With @jessicahullman.bsky.social and @statmodeling.bsky.social

juliarohrer.com/wp-content/u...

04.02.2026 10:24 — 👍 173    🔁 52    💬 9    📌 3
Post image

Popularity of the first name Kenzie correlates with UFO sightings in South Dakota (r=0.939)

31.01.2026 13:30 — 👍 3    🔁 2    💬 0    📌 0

Oh dear. Our favourite Fellow of the @royalsociety.org is in the news again

31.01.2026 07:07 — 👍 128    🔁 37    💬 4    📌 2

A new study from Anthropic finds that gains in coding efficiency from AI assistance did not reach statistical significance, while AI use noticeably degraded programmers' understanding of what they were doing. Incredible.

30.01.2026 23:47 — 👍 1321    🔁 623    💬 35    📌 65

If you're going to use LLMs to help you build an app or code:

1. you need RIGOROUS, rigorous unit testing, and/or

2. you need to already mostly know how to build the thing yourself so you can check and modify

I fear #2 is going to become less common, so let's be super adamant about #1 😬
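Point 1 can start as simply as asserting against values you can verify by hand. A minimal sketch (mine, with a hypothetical helper, not from the thread):

```python
# Minimal sketch of point 1 (hypothetical function, not from the thread):
# test LLM-written helpers against answers you can check by hand.

def sample_variance(xs):
    # Imagine this came from an LLM assistant.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

# Hand-checked values:
assert sample_variance([1, 1, 1]) == 0.0              # no spread
assert abs(sample_variance([1, 2, 3]) - 1.0) < 1e-12  # known answer
# A classic silent bug -- dividing by n instead of n - 1 -- would make
# the second assertion fail (it would return 2/3 instead of 1).
```

The point is that statistical code needs tests with hand-derivable expected values, not just "it runs without errors".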

30.01.2026 07:00 — 👍 53    🔁 6    💬 5    📌 0
Post image

Interested in the TADA! guidelines for improving analytical code sharing & reproducibility? (doi.org/10.32942/X2D...)

Register for the session with @joelpick.bsky.social & @eivimeycook.bsky.social at: libcal.essex.ac.uk/event/4465297

#OpenScience #Reproducibility #OpenResearch

P.S. Give that doggy 👇 a treat!

29.01.2026 07:04 — 👍 11    🔁 4    💬 0    📌 0

Thanks to some great comments and suggestions, we've updated TADA!

Read it here: ecoevorxiv.org/repository/v...

Transferable, Available, Documented, Annotated.

28.01.2026 08:20 — 👍 14    🔁 11    💬 1    📌 0