
Tom Costello

@tomcostello.bsky.social

research psychologist. beliefs, AI, computational social science. prof at american university.

3,704 Followers  |  217 Following  |  150 Posts  |  Joined: 22.09.2023

Latest posts by tomcostello.bsky.social on Bluesky

I'm going to be in Montreal for a few days starting tomorrow for COLM — anyone also at the conference / interested in meeting up, let me know!

07.10.2025 21:14 — 👍 0    🔁 0    💬 0    📌 0

This is a valid point, I think. The question is always what kind of alternative information-gathering processes AI chatbots replace. In the case of medical "self-diagnosis", there is some reason to believe that the common alternative mechanisms aren't superior.

28.07.2025 11:27 — 👍 5    🔁 1    💬 1    📌 0

Maybe you see this as all too rosy, which is fair and maybe even true, but warnings and dismissals (alone) are bad tools, if nothing else. The future isn't set. So yes, I believe we should actively articulate and defend a positive vision in order to reduce harms + capture gains.

24.07.2025 16:16 — 👍 2    🔁 0    💬 0    📌 0
Post image

Targeted ads have gone too far

24.07.2025 16:10 — 👍 5    🔁 0    💬 0    📌 0

Also, incentives are not static; if revenue continues to come from usage fees (rather than ads), maybe helping users reach reliable answers is indeed a profitable/competitive approach. Open question. Plus, I don't imagine these big companies want to replay the mistakes of the social media era.

24.07.2025 16:01 — 👍 2    🔁 0    💬 1    📌 0

So the problem is incentives. I agree. The incentives are aligned with building the models in the first place, too (hence my first sentence in that quote). Should we not try to identify and bolster a positive vision that underscores the potential returns to cooperation, democracy, etc.?

24.07.2025 15:51 — 👍 3    🔁 0    💬 1    📌 0

Thanks for sharing!

24.07.2025 15:37 — 👍 1    🔁 0    💬 0    📌 0
Preview
Large language models as disrupters of misinformation - Nature Medicine: As patients move from WebMD to ChatGPT, Thomas Costello makes the case for cautious optimism.

Thomas Costello argues that as patients move from WebMD to AI, we might be cautiously optimistic. Unlike earlier tools, LLMs can synthesize vast, shared knowledge, potentially helping users converge on more accurate beliefs.

The major caveat: this holds only as long as the LLMs are not trained on bad data.

16.07.2025 15:31 — 👍 3    🔁 2    💬 1    📌 0
Preview
Large language models as disrupters of misinformation - Nature Medicine: As patients move from WebMD to ChatGPT, Thomas Costello makes the case for cautious optimism.

Open link here: www.nature.com/articles/s41...

17.07.2025 20:38 — 👍 5    🔁 2    💬 0    📌 0
Post image

Is there a strong case for AI helping, rather than harming, the accuracy of people's beliefs about contentious topics? In this @nature.com Nature Medicine piece (focusing on vaccination), I argue the answer is YES. And it boils down to how LLMs differ from other sources of information.

17.07.2025 20:38 — 👍 28    🔁 8    💬 3    📌 1
Preview
Large language models as disrupters of misinformation - Nature Medicine: As patients move from WebMD to ChatGPT, Thomas Costello makes the case for cautious optimism.

This is an interesting take on LLMs, and the RCT on vaccine hesitancy is fascinating.

www.nature.com/articles/s41...

16.07.2025 20:29 — 👍 6    🔁 1    💬 0    📌 0

More on that front soon, actually...

09.07.2025 21:25 — 👍 0    🔁 0    💬 0    📌 0

I think this is interesting, and it would be worthwhile to convene a group and expose them to this chatbot interaction (perhaps it would be much less effective when social dynamics are involved). But I suspect the active ingredient is strong arguments + evidence, and LLMs can surface good arguments.

09.07.2025 20:35 — 👍 0    🔁 0    💬 0    📌 0
Post image Post image

Conspiracies emerge in the wake of high-profile events, but you can't debunk them with evidence because little yet exists. Does this mean LLMs can't debunk conspiracies during ongoing events? No!

We show they can in a new working paper.

PDF: osf.io/preprints/ps...

09.07.2025 16:34 — 👍 51    🔁 18    💬 3    📌 2

You mean given everything with the Epstein files?

09.07.2025 19:03 — 👍 0    🔁 0    💬 1    📌 0

I'm very excited about this new working paper showing that LLMs effectively countered conspiracies in the immediate aftermath of the 1st Trump assassination attempt, and that the treatment also reduced conspiratorial thinking about the subsequent 2nd assassination attempt.

09.07.2025 18:01 — 👍 19    🔁 6    💬 0    📌 0

Yeah, I think that talking to an LLM prompted to behave like ChatGPT is likely to amplify whichever tendencies already exist in a person (so it can amplify weird beliefs, as has been reported). But our studies give the LLM a very specific goal (e.g., debunking), so the comparison is not 1:1 in a meaningful way.

09.07.2025 18:46 — 👍 2    🔁 0    💬 2    📌 0

We also find this intervention succeeds for vaccine skepticism:

bsky.app/profile/dgra...

09.07.2025 16:34 — 👍 3    🔁 0    💬 0    📌 0

Do these effects extend to non-conspiracy beliefs, like climate attitudes? Yes!

bsky.app/profile/dgra...

09.07.2025 16:34 — 👍 3    🔁 0    💬 1    📌 0

Second, why are debunking dialogues so effective? Good arguments and evidence! (and, for unfolding conspiracies, saying "no one knows what's going on, you should be epistemically cautious" may be a strong argument)

bsky.app/profile/tomc...

09.07.2025 16:34 — 👍 3    🔁 0    💬 1    📌 0

Some other recent papers from our group on AI debunking:

First, does this work if people think they're talking to a human being? Yes!

bsky.app/profile/gord...

09.07.2025 16:34 — 👍 0    🔁 0    💬 1    📌 0

Huge thanks to my brilliant co-authors: Nathaniel Rabb (who split the work with me and is co-first author), Nick Stagnaro, @gordpennycook.bsky.social, and @dgrand.bsky.social

We're eager to hear your thoughts and feedback!

09.07.2025 16:34 — 👍 0    🔁 0    💬 1    📌 0
Post image

Also, the treatment succeeded for both Democrats and Republicans, who endorsed slightly different conspiratorial explanations of the assassination attempts (see figure below for a breakdown).

09.07.2025 16:34 — 👍 2    🔁 0    💬 1    📌 0
Post image

(The most notable part?) The effect was durable and preventative. When we recontacted participants 2 months later, after the second assassination attempt, those in the treatment group were ~50% less likely to endorse conspiracies about this new event! The debunking acted as an "inoculation" of sorts.

09.07.2025 16:34 — 👍 2    🔁 0    💬 1    📌 0
Post image

Did this work? Yes. The Gemini dialogues significantly reduced conspiracy beliefs compared to controls who chatted about an irrelevant topic or just read a fact sheet (d = .38). The effect was robust across multiple measures.

Key figure attached

09.07.2025 16:34 — 👍 1    🔁 0    💬 1    📌 0
Post image

Compared to our other studies (where we had an AI debunk established conspiracy theories), here the LLM used fundamentally different persuasive strategies. Instead of using facts (which aren't available, since no one knew what was going on!), it promoted epistemic caution & critical thinking.

09.07.2025 16:34 — 👍 3    🔁 0    💬 1    📌 0

Just days after the assassination attempt on Trump last year, we recruited Americans who endorsed conspiracies about the attempt, and had them interact with Google's Gemini 1.5 Pro (prompted to debunk).

(We also recontacted them 2 months later, after a 2nd assassination attempt on Trump).

09.07.2025 16:34 — 👍 1    🔁 0    💬 1    📌 0
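
As an illustration of the kind of setup this thread describes (an LLM prompted to debunk in a multi-turn dialogue), here is a minimal sketch using Google's google-generativeai Python SDK. The system prompt and the participant message are hypothetical stand-ins, not the study's actual materials.

    import google.generativeai as genai

    # Hypothetical debunking instruction, in the spirit described above:
    # for an unfolding event, promote epistemic caution rather than
    # asserting facts that don't yet exist.
    DEBUNK_PROMPT = (
        "You are speaking with someone who endorses a conspiracy theory "
        "about a recent, still-unfolding event. Little verified evidence "
        "exists yet. Respectfully counter their claims by distinguishing "
        "what is known from what is speculation, and explain why strong "
        "conclusions are premature."
    )

    genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

    model = genai.GenerativeModel(
        "gemini-1.5-pro",                  # the model named in the thread
        system_instruction=DEBUNK_PROMPT,
    )

    # Multi-turn chat, mirroring the dialogue format of the study.
    chat = model.start_chat()
    reply = chat.send_message(
        "There's no way a lone gunman got that close. It had to be staged."
    )
    print(reply.text)

In practice, each participant's messages would be relayed from the survey interface, with the chat object carrying the conversation history across turns.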

Here's a bit of spice. Brain research clearly needs to tackle more complexity than, say, "Step 1": simple linear causal chains. But that leaves an ~infinite set of alternatives. Here, @pessoabrain.bsky.social advocates not just for a Step 2, but for a Step 3. /1

arxiv.org/abs/2411.03621

02.07.2025 15:39 — 👍 44    🔁 18    💬 3    📌 0
Preview
Fast and robust visual object recognition in young children: The visual recognition abilities of preschool children rival those of state-of-the-art artificial intelligence models.

My paper with @stellalourenco.bsky.social is now out in Science Advances!

We found that children have robust object recognition abilities that surpass many ANNs. Models only outperformed kids when their training far exceeded what a child could experience in their lifetime.

doi.org/10.1126/scia...

02.07.2025 19:38 — 👍 105    🔁 36    💬 2    📌 2
