05.03.2026 23:31
But I agree collapse is a possibility and the epigraph nods to that
05.03.2026 23:31
interesting, thanks! very cool paper. we are not proposing that anyone alter scientific practice for short-term profits. i think science is different from the social media case? also, you argue that inaction is a choice that cedes control to profit-driven actors -- i think the same applies to AI.
05.03.2026 23:23
Hey thanks a ton Daryl!! We were pleased with how it turned out
05.03.2026 18:22
Thanks man! Yes me too...
05.03.2026 17:33
Today's SJDM Featured Paper is:
Costello, T. H., Pelrine, K., Kowal, M., Arechar, A. A., Godbout, J.-F., Gleave, A., Rand, D., & Pennycook, G. (2026). Large language models can effectively convince people to believe conspiracies. arXiv. doi.org/10.48550/arX...
I saw we were fighting about what AI will do to (social) science.
I have some thoughts (originally written for a book on scientific inquiry) and decided to turn them into my first substack article. Co-written with Nat Rabb!
tomstello.substack.com/p/standing-o...
🧵 on my new paper "Synthetic personas distort the structure of human belief systems" w/ Roberto Cerina. I'm v excited about...
🚨 Do synthetic samples look like human samples?
We compare 28 LLMs to the 2024 General Social Survey (GSS) to find out + develop a host of diagnostics...
I'm hiring a postdoc at @cmu.edu (w/ far.ai & @dgrand.bsky.social + @gordpennycook.bsky.social)!
How do LLMs shape human beliefs — and what do we do about it? AI safety meets behavioral science.
Open to technical and social science backgrounds.
Flexible start, dedicated funding, first-authored papers, great team.
Apply or share!
cmu.wd5.myworkdayjobs.com/CMU/job/Pitt...
🚨 New paper out at @ajpseditor.bsky.social 🚨
Do the public hold meaningful attitudes? Using the case of abortion policy preferences, we provide strong evidence that policy preferences can be coherent, stable over time, and causally explain vote choice.
doi.org/10.1111/ajps...
New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...
After countless arguments about what tasks ppl should/should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!
❤️
14.02.2026 01:09
Wow, thank you so much!!
14.02.2026 01:05
The last social science paper to win this prize was in 1981 (!!!) for Axelrod and Hamilton's "The Evolution of Cooperation" (!!!!!!!!!!!!)
13.02.2026 23:20
Huge thanks (thanks is not the word, or close) to my mentors+collaborators+coauthors @gordpennycook.bsky.social and @dgrand.bsky.social and to the many colleagues whose field-extending parallel research is helping this work flourish in, well, dialogue.
13.02.2026 23:20
Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).
🚨 New WP: Can an AI voter guide (grounded in information from a nonpartisan, fact-checked source) help voters' decision making? 🚨
We built and evaluated an LLM-based chatbot that provided voting info in CA & TX (N=2,474) right before the 2024 election. 🧵👇
If the medium is the message, then the message of algorithmic content platforms is not expression or connection or freedom. It is salience above all else.
This is what is happening to us on these platforms. They cannot behave any other way.
The #1 New York Times bestseller The Sirens' Call is now in paperback! Chris Hayes masterfully explores how our focus has been commodified & manipulated, urging us to reclaim control over our lives & future. The Sirens' Call is the big-picture vision we urgently need to offer clarity & guidance.
27.01.2026 16:22
After years in academia, I'm exploring data science and research roles in industry.
I'm a quant. social scientist (PhD Yale '24, NYU) focused on causal inference, experiments, and large-scale data.
Feel free to get in touch or share; all leads appreciated. dwstommes@gmail.com
not to be pedantic but jfk was intentionally killed (still they must mean by the cia or something)
I agree with your point about tail risk. This is hard to estimate because there was some backfire (or measurement error) in the debunk condition too. What would be a satisfying % return to baseline?
"Text / language conversations" is such a strange level of analysis — surely more accurate is "some subset of potential text, arranged in a particular order, containing a particular kind of information"
Unknowns abound in the latter framing.
Yeah I recall one particular talk, to a social psych dept that could hardly be described as ignorant of the literature, where the bunking result slide produced a loud collective gasp (when I revealed the magnitude)
27.01.2026 16:50
we very thoroughly piloted the debrief and knew it would work. there are also best practices for debriefing in misinfo studies that we followed.
27.01.2026 16:42
Our latest collaboration with CMU, Cornell, and others: AI is just as effective at spreading conspiracy beliefs as it is at debunking them. We found a fix that works, showing that we need to make deliberate design choices. Full thread below.
20.01.2026 15:11
If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...
"LLMs can effectively convince people to believe conspiracies"
But telling the AI not to lie might help.
Details in thread
Paper: arxiv.org/abs/2601.05050
Also, we're hiring a postdoc! More on this soon.
This work was supported by @schmidtsciences.bsky.social
Research by me, @kellinpelrine.bsky.social (FAR.AI), @matthewkowal.bsky.social (FAR.AI), Antonio Arechar (CIDE/MIT), Jean-François Godbout (Mila), @gleave.me (FAR.AI), @dgrand.bsky.social (Cornell), and @gordpennycook.bsky.social (Cornell)
What does this mean? LLMs are persuasive tools with dual-use potential. When instructed to mislead, they comply + succeed.
Current guardrails don't prevent this. But innovations + deliberate design choices (like truth constraints) help.
The question is whether we'll make them.