
Tom Costello

@tomcostello.bsky.social

research psychologist. beliefs, AI, computational social science. assistant prof at Carnegie Mellon

3,808 Followers  |  241 Following  |  198 Posts  |  Joined: 22.09.2023

Posts by Tom Costello (@tomcostello.bsky.social)

But I agree collapse is a possibility and the epigraph nods to that

05.03.2026 23:31 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Interesting, thanks! Very cool paper. We are not proposing that anyone alter scientific practice for short-term profits. I think science is different from the social media case? Also, you argue that inaction is a choice that cedes control to profit-driven actors -- I think the same applies to AI.

05.03.2026 23:23 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Hey thanks a ton Daryl!! We were pleased with how it turned out

05.03.2026 18:22 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks man! Yes me too...

05.03.2026 17:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Large language models can effectively convince people to believe conspiracies Large language models (LLMs) have been shown to be persuasive across a variety of contexts. But it remains unclear whether this persuasive power advantages truth over falsehood, or if LLMs can promote...

Today's SJDM Featured Paper is:

Costello, T. H., Pelrine, K., Kowal, M., Arechar, A. A., Godbout, J.-F., Gleave, A., Rand, D., & Pennycook, G. (2026). Large language models can effectively convince people to believe conspiracies. arXiv. doi.org/10.48550/arX...

05.03.2026 13:58 β€” πŸ‘ 7    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0
Standing on the Shoulders of Termites Or, how we learned to stop worrying and love AI (scientists)

I saw we were fighting about what AI will do to (social) science.

I have some thoughts (originally written for a book on scientific inquiry) and decided to turn them into my first substack article. Co-written with Nat Rabb!

tomstello.substack.com/p/standing-o...

05.03.2026 15:29 β€” πŸ‘ 18    πŸ” 4    πŸ’¬ 2    πŸ“Œ 1

🧡 on my new paper "Synthetic personas distort the structure of human belief systems" w/ Roberto Cerina. I'm v excited about...

🚨 Do synthetic samples look like human samples?

We compare 28 LLMs to the 2024 General Social Survey (GSS) to find out + develop a host of diagnostics...

25.02.2026 19:46 β€” πŸ‘ 166    πŸ” 78    πŸ’¬ 6    πŸ“Œ 19

I'm hiring a postdoc at @cmu.edu (w/ far.ai & @dgrand.bsky.social + @gordpennycook.bsky.social)!

How do LLMs shape human beliefs β€” and what do we do about it? AI safety meets behavioral science.

Open to technical and social science backgrounds.

23.02.2026 18:46 β€” πŸ‘ 42    πŸ” 27    πŸ’¬ 1    πŸ“Œ 3
Postdoctoral Fellow – Costello Lab – Dietrich College Carnegie Mellon University, in collaboration with FAR.AI and Cornell University, is seeking a postdoctoral researcher for a one-year appointment (with the possibility of extension) in Dietrich College...

Flexible start, dedicated funding, first-authored papers, great team.

Apply or share!

cmu.wd5.myworkdayjobs.com/CMU/job/Pitt...

23.02.2026 18:46 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0


🚨 New paper out at @ajpseditor.bsky.social 🚨

Do the public hold meaningful attitudes? Using the case of abortion policy preferences, we provide strong evidence that policy preferences can be coherent, stable over time, and causally explain vote choice.

doi.org/10.1111/ajps...

18.02.2026 23:26 β€” πŸ‘ 16    πŸ” 15    πŸ’¬ 1    πŸ“Œ 0

New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...

After countless arguments about what tasks ppl should/should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!

18.02.2026 22:22 β€” πŸ‘ 38    πŸ” 12    πŸ’¬ 2    πŸ“Œ 0

❀️

14.02.2026 01:09 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Wow, thank you so much!!

14.02.2026 01:05 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The last social science paper to win this prize was in 1981 (!!!) for Axelrod and Hamilton's "The Evolution of Cooperation" (!!!!!!!!!!!!)

13.02.2026 23:20 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Huge thanks (thanks is not the word, or close) to my mentors+collaborators+coauthors @gordpennycook.bsky.social and @dgrand.bsky.social and to the many colleagues whose field-extending parallel research is helping this work flourish in, well, dialogue.

13.02.2026 23:20 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).

13.02.2026 23:20 β€” πŸ‘ 55    πŸ” 6    πŸ’¬ 5    πŸ“Œ 1

🚨New WP: Can an AI voter guide (grounded in information from a nonpartisan, fact-checked source) help voters’ decision making? 🚨

We built and evaluated an LLM-based chatbot that provided voting info in CA & TX (N=2,474) right before the 2024 election. πŸ§΅πŸ‘‡

09.02.2026 20:56 β€” πŸ‘ 26    πŸ” 10    πŸ’¬ 3    πŸ“Œ 3

If the medium is the message, then the message of algorithmic content platforms is not expression or connection or freedom. It is salience above all else.

This is what is happening to us on these platforms. They cannot behave any other way.

28.01.2026 17:29 β€” πŸ‘ 507    πŸ” 26    πŸ’¬ 12    πŸ“Œ 1

The #1 New York Times bestseller The Sirens’ Call is now in paperback! Chris Hayes masterfully explores how our focus has been commodified & manipulated, urging us to reclaim control over our lives & future. The Sirens’ Call is the big-picture vision we urgently need to offer clarity & guidance.

27.01.2026 16:22 β€” πŸ‘ 119    πŸ” 16    πŸ’¬ 1    πŸ“Œ 5

After years in academia, I’m exploring data science and research roles in industry.

I'm a quant. social scientist (PhD Yale ’24, NYU) focused on causal inference, experiments, and large-scale data.

Feel free to get in touch or share; all leads appreciated. dwstommes@gmail.com

27.01.2026 18:45 β€” πŸ‘ 31    πŸ” 20    πŸ’¬ 0    πŸ“Œ 0

not to be pedantic but jfk was intentionally killed (still they must mean by the cia or something)

I agree with your point about tail risk. This is hard to estimate because there was some backfire (or measurement error) in the debunk condition too. What would be a satisfying % return to baseline?

27.01.2026 17:58 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

β€œText / language conversations” is such a strange level of analysis β€” surely more accurate is β€œsome subset of potential text, arranged in a particular order, containing a particular kind of information”

Unknowns abound in latter framing.

27.01.2026 16:58 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Yeah I recall one particular talk, to a social psych dept that could hardly be described as ignorant of the literature, where the bunking result slide produced a loud collective gasp (when I revealed the magnitude)

27.01.2026 16:50 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We very thoroughly piloted the debrief and knew it would work. There are also best practices for debriefing in misinfo studies that we followed.

27.01.2026 16:42 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our latest collaboration with CMU, Cornell, and others: AI is just as effective at spreading conspiracy beliefs as it is at debunking them. We found a fix that works, showing that we need to make deliberate design choices. Full thread below.

20.01.2026 15:11 β€” πŸ‘ 4    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...

β€˜LLMs can effectively convince people to believe conspiracies’

But telling the AI not to lie might help.

Details in thread

20.01.2026 14:59 β€” πŸ‘ 29    πŸ” 20    πŸ’¬ 1    πŸ“Œ 2

Paper: arxiv.org/abs/2601.05050

Also, we're hiring a postdoc! More on this soon.

20.01.2026 14:59 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This work was supported by @schmidtsciences.bsky.social

Research by me, @kellinpelrine.bsky.social (FAR.AI), @matthewkowal.bsky.social (FAR.AI), Antonio Arechar (CIDE/MIT), Jean-FranΓ§ois Godbout (Mila), @gleave.me (FAR.AI), @dgrand.bsky.social (Cornell), and @gordpennycook.bsky.social (Cornell)

20.01.2026 14:59 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What does this mean? LLMs are persuasive tools with dual-use potential. When instructed to mislead, they comply + succeed.

Current guardrails don't prevent this. But innovations + deliberate design choices (like truth constraints) help.

The question is whether we'll make them.

20.01.2026 14:59 β€” πŸ‘ 1    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0
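The "truth constraint" design choice described above can be illustrated with a minimal sketch. This is purely hypothetical scaffolding, not the paper's actual protocol: the prompt wording, the `build_system_prompt` helper, and the example claim are all assumptions made for illustration.

```python
# Hypothetical sketch of a "truth constraint" as a system-prompt design choice.
# Prompt text and function names are illustrative, not taken from the paper.

BASE_PROMPT = (
    "You are talking with a study participant. "
    "Persuade them that the following claim is correct: {claim}"
)

TRUTH_CONSTRAINT = (
    " Important: you must not state anything false or misleading. "
    "Use only accurate, verifiable evidence."
)

def build_system_prompt(claim: str, constrain_truth: bool = False) -> str:
    """Compose the persuader's system prompt, optionally appending a truth constraint."""
    prompt = BASE_PROMPT.format(claim=claim)
    if constrain_truth:
        prompt += TRUTH_CONSTRAINT
    return prompt

# Same persuasion goal, with and without the constraint:
unconstrained = build_system_prompt("the moon landing was staged")
constrained = build_system_prompt("the moon landing was staged", constrain_truth=True)
```

The point of the sketch is that the constrained and unconstrained conditions differ only by an explicit honesty instruction, which is the kind of deliberate design choice the thread argues for.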