@tomcostello.bsky.social
research psychologist. beliefs, AI, computational social science. prof at American University.

I'm going to be in Montreal for a few days starting tomorrow for COLM - anyone also at the conference / interested in meeting up, let me know!
07.10.2025 21:14

This is a valid point, I think. The question is always what type of alternative information-gathering processes AI chatbots replace. In the case of medical "self-diagnosis", there is some reason to believe that the common alternative mechanisms aren't superior.
28.07.2025 11:27

Maybe you see this as all too rosy, which is fair and maybe even true, but warnings and dismissals (alone) are bad tools. If nothing else, the future isn't set. So yes, I believe we should actively articulate and defend a positive vision in order to reduce harms + capture gains.
24.07.2025 16:16

Targeted ads have gone too far
24.07.2025 16:10

Also, incentives are not static; if revenue continues to come from usage fees (rather than ads), helping users reach reliable answers may indeed be a profitable, competitive approach. An open question. Plus, I don't imagine these big companies want to replay the social media era's mistakes.
24.07.2025 16:01

So the problem is incentives. I agree. The incentives are aligned with building the models in the first place, too (hence my first sentence in that quote). Should we not try to identify and bolster a positive vision that underscores potential returns to cooperation, democracy, etc.?
24.07.2025 15:51

Thanks for sharing!
24.07.2025 15:37

Thomas Costello argues that as patients move from WebMD to AI, we might be slightly optimistic. Unlike earlier tools, LLMs can synthesize vast, shared knowledge, potentially helping users converge on more accurate beliefs.
The major caveat: this holds only as long as the LLMs are not trained on bad data.
Open link here: www.nature.com/articles/s41...
17.07.2025 20:38

Is there a strong case for AI helping, rather than harming, the accuracy of people's beliefs about contentious topics? In this @nature.com Nature Medicine piece (focusing on vaccination), I argue the answer is YES. And it boils down to how LLMs differ from other sources of information.
This is an interesting take on LLMs, and the RCT on vaccine hesitancy is fascinating.
www.nature.com/articles/s41...
More on that front soon, actually...
09.07.2025 21:25

I think this is interesting, and it would be worthwhile to convene a group and expose them to this chatbot interaction (perhaps it would be much less effective when social dynamics are involved), but I think the active ingredient is strong arguments + evidence. LLMs can surface good arguments.
09.07.2025 20:35

Conspiracies emerge in the wake of high-profile events, but you can't debunk them with evidence because little yet exists. Does this mean LLMs can't debunk conspiracies during ongoing events? No!
We show they can in a new working paper.
PDF: osf.io/preprints/ps...
You mean given everything with the Epstein files?
09.07.2025 19:03

I'm very excited about this new working paper showing that LLMs effectively countered conspiracies in the immediate aftermath of the 1st Trump assassination attempt, and that the treatment also reduced conspiratorial thinking about the subsequent 2nd assassination attempt.
09.07.2025 18:01

Yeah, I think that talking to an LLM that is prompted to behave like ChatGPT is likely to amplify whichever tendencies already exist in a person (and so can amplify weird beliefs, as has been reported). But our studies give the LLM a very specific goal (e.g., debunking), so the comparison isn't 1:1 in any meaningful way.
09.07.2025 18:46

We also find this intervention succeeds for vaccine skepticism:
bsky.app/profile/dgra...
Do these effects hold for non-conspiracy beliefs, like climate attitudes? yes!
bsky.app/profile/dgra...
Second, why are debunking dialogues so effective? Good arguments and evidence! (and, for unfolding conspiracies, saying "no one knows what's going on, you should be epistemically cautious" may be a strong argument)
bsky.app/profile/tomc...
Some other recent papers from our group on AI debunking:
First, does this work if people think they're talking to a human being? yes!
bsky.app/profile/gord...
Huge thanks to my brilliant co-authors: Nathaniel Rabb (who split the work with me and is co-first author), Nick Stagnaro, @gordpennycook.bsky.social, and @dgrand.bsky.social
We're eager to hear your thoughts and feedback!
Also, the treatment succeeded for both Democrats and Republicans, who endorsed slightly different conspiratorial explanations of the assassination attempts (see figure below for a breakdown)
09.07.2025 16:34

(The most notable part?) The effect was durable and preventative. When we recontacted participants 2 months later, after the second assassination attempt, those from the treatment group were ~50% less likely to endorse conspiracies about this new event! The debunking acted as an "inoculation" of sorts.
09.07.2025 16:34

Did this work? Yes. The Gemini dialogues significantly reduced conspiracy beliefs compared to controls who chatted about an irrelevant topic or just read a fact sheet (d = .38). The effect was robust across multiple measures.
Key figure attached
Compared to our other studies (where we had an AI debunk established conspiracy theories), here the LLM used fundamentally different persuasive strategies. Instead of using facts (which aren't available, since no one knew what was going on!), it promoted epistemic caution & critical thinking.
09.07.2025 16:34

Just days after the assassination attempt on Trump last year, we recruited Americans who endorsed conspiracies about the attempt, and had them interact with Google's Gemini 1.5 Pro (prompted to debunk).
(We also recontacted them 2 months later, after a 2nd assassination attempt on Trump).
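For readers curious what a setup like this might look like in code, here is a minimal, hypothetical sketch using Google's google-generativeai Python SDK. The system prompt and participant message are illustrative placeholders, not the study's actual materials.

```python
# Hypothetical sketch (not the study's code): a multi-turn debunking dialogue
# with Gemini 1.5 Pro, steered via a system instruction.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    # Illustrative debunking goal; for an unfolding event, the prompt can
    # emphasize epistemic caution rather than (not-yet-available) facts.
    system_instruction=(
        "The user believes a conspiracy theory about a breaking news event. "
        "Respond persuasively using only verified information, acknowledge "
        "what is still unknown, and encourage epistemic caution."
    ),
)

chat = model.start_chat()  # multi-turn session, as in a dialogue study
reply = chat.send_message("The assassination attempt was obviously staged.")
print(reply.text)
```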
Here's a bit of spice. Brain research clearly needs to tackle more complexity (than, say, Step 1: simple linear causal chains). But that leaves an ~infinite set of alternatives. Here, @pessoabrain.bsky.social advocates not just for a step 2, but for a step 3. /1
arxiv.org/abs/2411.03621
My paper with @stellalourenco.bsky.social is now out in Science Advances!
We found that children have robust object recognition abilities that surpass many ANNs. Models only outperformed kids when their training far exceeded what a child could experience in their lifetime.
doi.org/10.1126/scia...