
Tom Costello

@tomcostello.bsky.social

research psychologist. beliefs, AI, computational social science. assistant prof at Carnegie Mellon

3,724 Followers  |  224 Following  |  167 Posts  |  Joined: 22.09.2023

Latest posts by tomcostello.bsky.social on Bluesky

Can speed cameras make streets safer? Quasi-experimental evidence from New York City | PNAS. Each year, approximately 40,000 people die in vehicle collisions in the United States, generating $340 billion in economic costs. To make roads saf...

Our new study provides rare causal evidence about NYC's speed camera program. We find large reductions in collisions (30%) and injuries (16%) near intersections with cameras. www.pnas.org/doi/abs/10.1... @astagoff.bsky.social @brendenbeck.bsky.social 🧪

08.12.2025 20:08 | 👍 436  🔁 166  💬 7  📌 28

I should also say that lies can ofc be persuasive if they seem true and you don't check them. AI can indeed fool people with lies in certain contexts. But that's why we need tools and interventions that help people think critically and discern lies from truth without much cognitive effort.

06.12.2025 17:15 | 👍 1  🔁 0  💬 0  📌 0

IMO, if we want to live in a democracy, we ought to embrace persuasion.

When we treat all AI info as bad, we are implicitly arguing that voters are incapable of processing arguments.

This is an illiberal stance.

Democracy relies on the "unforced force of the better argument" (Habermas).

06.12.2025 16:39 | 👍 3  🔁 1  💬 1  📌 0

Final thing: AI doesn't appreciably lower the cost of lies; lies were already quite cheap.

But it does indeed lower the cost of high-quality, truthful communication.

06.12.2025 16:39 | 👍 2  🔁 0  💬 1  📌 0

Voters are not blank slates waiting to be reprogrammed. They can think critically… that's what these persuasion studies are clearly showing now.

06.12.2025 16:39 | 👍 2  🔁 0  💬 1  📌 0

Post image

That literally happened in response to our paper the Nature news piece is about!

Breitbart covered it and explicitly linked it to LLMs' "leftist bias"!

That seems bad. It means conservatives will trust AI-provided information (which is largely accurate) less.

www.breitbart.com/politics/202...

06.12.2025 16:39 | 👍 1  🔁 0  💬 1  📌 0

AI is not fooling everyone with lies. People are not that stupid.

But if we think AI is fooling everyone with lies, that's pretty bad. It could even lead to a "liar's dividend," where politicians can dismiss real evidence/arguments as "just AI," and voters disengage.

06.12.2025 16:39 | 👍 3  🔁 0  💬 1  📌 0
AI as Governance: Political scientists have had remarkably little to say about artificial intelligence (AI), perhaps because they are dissuaded by its technical complexity and by current debates about whether AI might ...

AI can be a way to access new and reliable information. That seems good? Especially at the micro level, because people are reasonable. But of course it gets complex at the macro level.

Henry Farrell has had some excellent perspectives on this; see

www.annualreviews.org/content/jour...

and www.science.org/doi/10.1126/...

06.12.2025 16:39 | 👍 0  🔁 0  💬 1  📌 0
AI misinformation may have paradoxical consequences: To understand why, consider the side-blotched lizard.

But see this recent Economist piece for some reasonable predictions. Market for lemons, redux.

economist.com/finance-and-economics/2025/12/04/ai-misinformation-may-have-paradoxical-consequences

06.12.2025 16:39 | 👍 2  🔁 0  💬 1  📌 0

For two, society is not static; it adapts to new things. We learned that photos can be airbrushed, we are learning that audio can be cloned, etc.

The second- and third-order impacts of these changes might be weird and hard to anticipate.

06.12.2025 16:39 | 👍 3  🔁 0  💬 1  📌 0

For one, AI generates content, but it cannot force uptake. For AI persuasion to work, it has to win attention.

In a high-choice media environment, it is incredibly hard to get a target to even see a specific ad, let alone process it.

06.12.2025 16:39 | 👍 3  🔁 0  💬 1  📌 0

Any headline that ends in a question mark can be answered by the word "no" (Betteridge's law).

Should we worry about gen AI persuasion shaping elections?

I think "no" holds here.

See knightcolumbia.org/content/dont... for a good summary of why we shouldn't be too concerned.

06.12.2025 16:39 | 👍 7  🔁 2  💬 1  📌 0

I agree, 19% is bad! But in this case, the job of the AI is basically political ad maker and/or lawyer and/or pundit... if you did an in-depth fact-check on the informational claims made by those groups, I kinda suspect it'd be worse than 19%. Could be wrong, though.

05.12.2025 18:14 | 👍 3  🔁 0  💬 0  📌 0

Key finding, imo, is that information-dense argumentation was the driver of persuasion, which is an optimistic result in many ways! Humans respond to info.

IF we can make AI mostly tell the truth, persuasion might be a net good (conditional on second-order effects on the info ecosystem etc. cooperating).

05.12.2025 18:07 | 👍 3  🔁 0  💬 0  📌 0
Post image

Further, in this @nature.com paper (rdcu.be/eTcbQ) on changing vote choice w/ AI,

we fact-checked >8,000 statements and found most were broadly accurate (but AIs arguing for right-leaning candidates made more significantly inaccurate claims).

05.12.2025 18:05 | 👍 2  🔁 1  💬 1  📌 0

I encourage folks to read the actual paper: the models were prompted to change opinions, not to be accurate. Humans also use inaccurate information.

But the AIs were still reasonably accurate (avg. 77/100), with no evidence that inaccurate claims are more persuasive.

05.12.2025 18:00 | 👍 10  🔁 1  💬 2  📌 1
Voters' minds are hard to change. AI chatbots are surprisingly good at it. New research suggests AI chatbots can shift people's political views more effectively than campaign ads on TV.

New: AI chatbots can change voters' minds, according to a pair of in-depth studies published just now in Science and Nature.

How they do it is interesting, and concerning. Gift link: wapo.st/49RSstP

04.12.2025 19:07 | 👍 171  🔁 83  💬 9  📌 32

Very excited this paper is live!

Congratulations to first author Hause Lin for being such a badass researcher and all-around cool guy.

04.12.2025 22:34 | 👍 8  🔁 1  💬 0  📌 0
Post image

New paper in press at JPSP! An adversarial collaboration focusing on a large-scale test of how strongly implicit racial attitudes predict discriminatory behavior. Pre-print here: osf.io/preprints/ps...

02.12.2025 14:13 | 👍 121  🔁 55  💬 7  📌 11
From Extrinsic to Intrinsic Motivation: Testing an AI-powered Motivational Interviewing System to Foster Prosocial Motivation. Scalable interventions promoting sustained behavioral change are crucial for addressing societal issues, yet traditional approaches often require inte…

Excited to share work led by Conrado Eiroa-Solans, a PhD student at UC Berkeley whom I first met when he was applying to grad school. His new paper is out in Computers in Human Behavior Reports. /1

28.11.2025 01:32 | 👍 9  🔁 3  💬 1  📌 0
Using conversational AI to reduce science skepticism. Mistrust of the scientific consensus around issues such as climate change and vaccination is mainstream, compromising our ability to respond to existe…

Cool review of conversational AI interventions for reducing misbeliefs!

www.sciencedirect.com/science/arti...

"LLMs have clear communication advantages: they are highly accessible, interactive, conversationally engaging, multi-lingual, and responsive to individuals' unique information needs."

21.11.2025 16:11 | 👍 5  🔁 0  💬 0  📌 0

Y'all. N > 3,800!!!!!!!

Goodness gracious.

12.11.2025 22:49 | 👍 60  🔁 26  💬 6  📌 2

I'm going to be in Montreal for a few days starting tomorrow for COLM. Anyone also at the conference / interested in meeting up, let me know!

07.10.2025 21:14 | 👍 5  🔁 0  💬 0  📌 0

This is a valid point, I think. The question is always what type of alternative information-gathering processes AI chatbots replace. In the case of medical "self-diagnosis", there is some reason to believe that common alternative mechanisms aren't superior.

28.07.2025 11:27 | 👍 5  🔁 1  💬 1  📌 0

Maybe you see this as all too rosy, which is fair and maybe even true, but warnings and dismissals (alone) are bad tools, if nothing else. The future isn't set. So yes, I believe we should actively articulate and defend a positive vision in order to reduce harms + capture gains.

24.07.2025 16:16 | 👍 3  🔁 0  💬 0  📌 0
Post image

Targeted ads have gone too far

24.07.2025 16:10 | 👍 6  🔁 0  💬 0  📌 0

Also, incentives are not static; if revenue continues to come from usage fees (rather than ads), maybe helping users reach reliable answers is indeed a profitable/competitive approach. Open question. Plus, I don't imagine these big companies want to replay social-media-era mistakes.

24.07.2025 16:01 | 👍 2  🔁 0  💬 1  📌 0

So the problem is incentives. I agree. The incentives are aligned with building the models in the first place, too (hence my first sentence in that quote). Should we not try to identify and bolster a positive vision that underscores potential returns to cooperation, democracy, etc.?

24.07.2025 15:51 | 👍 3  🔁 0  💬 1  📌 0
