
Ben Tappin

@benmtappin.bsky.social

• Assistant professor, London School of Economics and Political Science • Persuasion, technology, experiments • benmtappin.com

2,169 Followers  |  415 Following  |  70 Posts  |  Joined: 18.09.2023

Posts by Ben Tappin (@benmtappin.bsky.social)

Post image

New newspaper headline for your Intro to Causal Inference lecture just dropped

27.02.2026 12:57 — 👍 122    🔁 26    💬 3    📌 3
“I don’t like tibbles”, “pipes are overrated”, “loops are great actually”, … what’s next?

Ah yes — the R contrarian character arc 😄
If we continue the escalation, the next stages usually look like:
	•	“Actually, base R plotting is more transparent.”
	•	“data.table was right all along.”
	•	“I write my own S3 methods for fun.”
	•	“Vectorization is just premature optimization avoidance.”
	•	“apply() is perfectly readable.”
	•	“Why depend on 40 packages for a left join?”
	•	“I don’t trust NSE.”
	•	“Magrittr changed evaluation semantics and nobody talks about it.”
	•	“One well-written for-loop beats clever code.”
	•	“I opened the tidyverse source code once and never recovered.”


Planning my next conversation starter

26.02.2026 18:06 — 👍 100    🔁 11    💬 22    📌 7

I too was very glad to see this! But I feel like the whole episode bodes badly for the future. It’s not sustainable to rely on the CEO of a private company to act against their financial self-interest in order to curtail high-risk AI deployment (here mass surveillance and fully autonomous weapons).

27.02.2026 08:37 — 👍 3    🔁 0    💬 1    📌 0
Post image

🔔 “How real is the LLM threat to online research in academia?” will be live today.

Experts from Microsoft Research, MIT / Stanford, Max Planck Institute, and Prolific discuss the threat of agentic AI to online research, and how to protect against it.

Link to join live below. #AcademicSky #Research

26.02.2026 16:05 — 👍 7    🔁 3    💬 1    📌 0

Reposting for visibility. Many researchers still appear oblivious to this fact, which is terrifying! It should be covered in every Experiment Design 101 course.

26.02.2026 08:41 — 👍 7    🔁 0    💬 0    📌 0

When I pitch academics on my paper on nulls one common and understandable reaction is "but they're probably noisy and thus uninformative nulls." This is true, but it misses the key realization that WE PUBLISH THE RESULT WHEN THE NOISY TEST IS P<0.05.

11.02.2026 17:37 — 👍 36    🔁 4    💬 1    📌 2
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.


I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.

11.02.2026 17:00 — 👍 638    🔁 223    💬 30    📌 51
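The selection mechanism this thread describes can be sketched with a toy simulation (a minimal illustration in Python, not the paper's calibrated model; the effect size, sample size, and function name here are my own assumptions): when effects are small and studies noisy, conditioning publication on p < 0.05 both suppresses nulls and exaggerates the estimates that do get through.

```python
import math
import random

def simulate_publication_filter(true_effect=0.1, n=50, studies=10_000, seed=1):
    """Simulate two-arm studies (unit-variance outcomes, n per arm) with a
    small true effect, then 'publish' only results with two-sided p < 0.05."""
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n)  # standard error of the difference in means
    estimates, published = [], []
    for _ in range(studies):
        est = rng.gauss(true_effect, se)  # draw from the sampling distribution
        z = abs(est) / se
        p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided normal p-value
        estimates.append(est)
        if p < 0.05:
            published.append(est)
    mean_all = sum(estimates) / len(estimates)
    mean_pub = sum(published) / len(published)
    pub_rate = len(published) / studies
    return mean_all, mean_pub, pub_rate

mean_all, mean_pub, pub_rate = simulate_publication_filter()
print(f"mean of all estimates: {mean_all:.2f} (true effect 0.10)")
print(f"share passing p < 0.05: {pub_rate:.0%}")
print(f"mean of published estimates: {mean_pub:.2f}")
```

With these (assumed) numbers only a small minority of studies clear the filter, and the published estimates average several times the true effect: the nulls really are noisy, but conditioning publication on p < 0.05 is exactly what turns that noise into systematic exaggeration.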
How “95%” escaped into the world – and why so many believed it
Challenging sloppy thinking

@benmtappin.bsky.social I just was pointed to this which is much more thorough and arrives at the same conclusion: www.exponentialview.co/p/how-95-esc...

The "95% fail" number is essentially meaningless

11.02.2026 11:42 — 👍 1    🔁 1    💬 0    📌 0

Felix and friends looking closely at the details so you don’t have to 👌👇

11.02.2026 10:10 — 👍 2    🔁 0    💬 1    📌 0
Post image

A short note on questionable AI studies, or why friends don’t let friends make %-claims based on small-n qualitative research interview reports

New week, new AI newsletter from Marina and myself here at RISJ: buff.ly/ckaUSn9

11.02.2026 10:06 — 👍 5    🔁 4    💬 1    📌 3

Excited to dig into this! Thanks for the work Luc and team. Quick question: what’s happening with the y axis labels in figure 2 (0-20-80-60 etc.)? At first I thought I was misunderstanding something about your measurement, but I can’t see where. Are they just typos or what?

10.02.2026 14:11 — 👍 0    🔁 0    💬 1    📌 0
Post image

It’s that time of year again

07.02.2026 17:14 — 👍 1    🔁 0    💬 0    📌 0
Post image

Feeling seen 👀

06.02.2026 22:03 — 👍 1    🔁 0    💬 0    📌 0

Thanks Kevin! 🙏

06.02.2026 18:19 — 👍 1    🔁 0    💬 0    📌 0
Post image

Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues

onlinelibrary.wiley.com/doi/10.1111/...

06.02.2026 14:14 — 👍 6    🔁 6    💬 2    📌 0
Post image

🚨New WP "@Grok is this true?"
We analyze 1.6M factcheck requests on X (Grok & Perplexity)
📌Usage is polarized, Grok users more likely to be Reps
📌BUT Rep posts rated as false more often—even by Grok
📌Bot agreement with factchecks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...

03.02.2026 21:55 — 👍 118    🔁 48    💬 2    📌 3
PhD Scholarship

My Centre is a unique place to do a PhD in Philosophy, because you can be in constant contact with experts in veterinary medicine, psychology, zoology and policy and be part of a team united by a shared interest in animal minds. We now have our 1st ever PhD scholarship: www.lse.ac.uk/sentience/phd

30.01.2026 07:39 — 👍 142    🔁 77    💬 1    📌 6
Postdoctoral Research Fellow in Quantitative Political Behaviour and Political Economy: Whiteknights, Reading, UK. The closing date for applications is 23.59 on 22nd February 2026

📢 JOB ALERT! Postdoc opportunity in Political Behaviour & Political Economy (UKRI‑funded) - Please do share with anyone who might be a great fit. If you’re interested or would like to know more, please feel free to get in touch!! jobs.reading.ac.uk/Job/JobDetai...

26.01.2026 16:37 — 👍 30    🔁 31    💬 0    📌 1
Call for Proposals: Data Collection for Replication+Novel Political Science Survey Experiments
Alexander Coppock and Mary McGrath
January 27, 2026

We invite proposals for a survey experiment replication+novel design competition. Selected replication+novel design survey experiments will be conducted on large samples of American respondents, quota sampled to match U.S. Census margins and filtered for quality and attention by the survey sample provider Rep Data (repdata.com).

Each proposal consists of two parts: (1) a replication study of an existing, previously published survey experiment, and (2) a novel experimental design on a topic of the authors’ choosing.

The replication studies and reanalyses of the existing studies will be combined into a meta-paper to be co-authored by all authors of accepted proposals along with the principal investigators (Coppock and McGrath). As a condition of acceptance, authors commit to sharing the data, producing a write-up of the findings from their novel design for submission to a scholarly journal, and publicly posting a working paper pre-publication.


🎺 Call for proposals 🎺

1️⃣ replicate an existing experiment
2️⃣ run a novel experiment

on repdata.com

3️⃣ coauthor with Mary McGrath and me to meta-analyze the replications and existing studies
4️⃣ publish your study

details: alexandercoppock.com/replication_...
applications open Feb 1

please repost!

27.01.2026 22:16 — 👍 77    🔁 70    💬 0    📌 3

I land somewhere between this and the OP. Evaluating the quality of the methods often requires fully understanding the research question and estimand. And that usually requires reading the intro. (Disclaimer: but even then it’s no guarantee😭 cf. www.the100.ci/2024/08/27/l... @dingdingpeng.the100.ci)

26.01.2026 19:57 — 👍 3    🔁 0    💬 0    📌 0

Here’s a recent review of the evidence: doi.org/10.1177/1745...

And a classic (though not strictly political beliefs): doi.org/10.1006/obhd...

18.01.2026 08:49 — 👍 2    🔁 0    💬 1    📌 0
Post image

My students are in for a treat next week

16.01.2026 14:36 — 👍 2    🔁 0    💬 0    📌 0
For Digital Mass Persuasion, Exposure Matters More Than Persuasiveness
If you are interested in understanding the mass persuasive impact of digital media content, you should generally pay more attention to content exposure than to its persuasiveness.

A very interesting data-rich analysis of persuasion on digital media by @benmtappin.bsky.social. Recommend!

open.substack.com/pub/benmtapp...

22.12.2025 13:08 — 👍 15    🔁 4    💬 0    📌 0
Post image Post image

🚨 New in Nature+Science!🚨
AI chatbots can shift voter attitudes on candidates & policies, often by 10+pp
🔹Exps in US Canada Poland & UK
🔹More “facts”→more persuasion (not psych tricks)
🔹Increasing persuasiveness reduces "fact" accuracy
🔹Right-leaning bots=more inaccurate

04.12.2025 20:42 — 👍 167    🔁 70    💬 2    📌 3
Post image

🚨 New working paper 🚨

We often see populist parties like Reform UK blame higher energy bills on climate change policies. What are the political consequences of this strategy?

Very early draft; comments and criticisms are welcomed!

full draft: z-dickson.github.io/assets/dicks...

18.11.2025 15:39 — 👍 48    🔁 17    💬 3    📌 0
Public Opinion Analytics Lab

"While testing one dimension at a time can yield simple results, those effects may not generalise to richer, real-world contexts."

Read our new POAL Methods Briefs on Conjoint Experiments from Thomas Robinson!

Link: www.poal.co.uk/research/met...

10.11.2025 08:47 — 👍 12    🔁 10    💬 0    📌 2
The Intelligence Curse
This series examines the incoming crisis of human irrelevance and provides a map towards a future where people remain the masters of their destiny.

Insightful long-read:
"With AGI [artificial general intelligence], powerful actors will lose their incentive to invest in regular people–just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor."
intelligence-curse.ai

12.10.2025 10:44 — 👍 6    🔁 1    💬 0    📌 0
AI is persuasive, but that’s not the real problem for democracy
Opinion: Felix M Simon argues that AI is unlikely to significantly shape election results in the near future, but warns that it could damage democracy through a steady erosion of institutional trust.

🗞️ 🤖 Weekend reading anyone? For the launch of @transformernews.ai as a standalone publication, they invited me to contribute a piece on what persuasive AI might mean for democracy and elections.

Here’s the result…

buff.ly/OJsNmpK

03.10.2025 17:19 — 👍 5    🔁 8    💬 1    📌 0

WE ARE HIRING! 2 Lecturers in Quantitative Social Science. Want a friendly interdisciplinary department in one of the world's most vibrant cities? This just might be for you.

Apply by: 10 Oct

www.ucl.ac.uk/work-at-ucl/...

01.09.2025 13:59 — 👍 149    🔁 156    💬 3    📌 9

Wrote about how the UK's online age verification requirements are already proving to be a disaster (as UK regulators were clearly warned they would be) and how unhelpful the UK's response to this mess has been, including their tech minister saying anyone who complains supports predators.

05.08.2025 00:31 — 👍 805    🔁 383    💬 6    📌 17