New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...
After countless arguments about what tasks ppl should/should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!
18.02.2026 22:22
Come join our academic family!
24.02.2026 00:22
Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).
13.02.2026 23:20
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
New @scalinglaws.bsky.social episode: @noupside.bsky.social and I talk to @dgrand.bsky.social about his research showing AI chatbots can shift people's political beliefs. www.lawfaremedia.org/article/scal...
10.02.2026 17:39
APE update: we retested recent frontier models on whether they still comply with requests to persuade on extreme harm (terrorism, sexual abuse). GPT-5.1 & Claude Opus 4.5 → near-zero compliance. But Gemini 3 Pro complies 85% of the time, with no jailbreak needed. 🧵
11.02.2026 16:19
Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues
onlinelibrary.wiley.com/doi/10.1111/...
06.02.2026 14:14
🚨 New WP: "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
• Usage is polarized; Grok users are more likely to be Reps
• BUT Rep posts are rated as false more often, even by Grok
• Bot agreement with fact-checks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
03.02.2026 21:55
Does a motivation to persuade someone of a view we do not hold cause us to deceptively self-persuade, shifting our own view?
No, research by Zhang & @dgrand.bsky.social suggests; simple preferential exposure to information has the same effect:
buff.ly/vB18poi
05.02.2026 09:18
Our open model proving out specialized RAG LMs over scientific literature has been published in Nature.
Congrats to our lead @akariasai.bsky.social & team of students and Ai2 researchers/engineers
www.nature.com/articles/s41...
04.02.2026 22:43
Grok fact-checks our paper on Grok fact-checking - and it approves!
04.02.2026 13:49
Stay tuned for another paper digging deep into the fact-checking performance of a bunch of different API models
04.02.2026 01:42
SUMMARY:
• AI fact-checking on X is widespread
• Models are reasonably accurate, and likely to improve
• But usage and response are highly polarized
• First indication that AI is heading in the direction of other media: “different political tribes, different AI referees”
03.02.2026 21:55
In a survey experiment (N=1,592 US adults), LLM fact-checks meaningfully shift beliefs in the direction of the fact-check; BUT responses to Grok fact-checks become polarized by partisanship when the model identity is disclosed.
Similarly, trust in Grok is highly polarized
03.02.2026 21:55
Compared to professional fact-checkers on a 100-tweet sample:
Grok bot agrees 55%
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%
So: signal, but not perfect.
BUT the Grok-4 API agrees 64%, as good as inter-fact-checker agreement! Promising for AI fact-checking...
03.02.2026 21:55
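The agreement rates above are raw pairwise agreement: the share of items on which two raters give the same verdict. A minimal sketch, using hypothetical verdict labels (not the paper's actual data):

```python
def raw_agreement(a, b):
    """Share of items on which two raters give the same verdict."""
    assert len(a) == len(b), "raters must judge the same items"
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical verdicts ("T" = true, "F" = false) for 5 tweets:
grok    = ["T", "F", "F", "T", "F"]
checker = ["T", "F", "T", "T", "F"]
print(raw_agreement(grok, checker))  # → 0.8
```

Note raw agreement does not correct for chance; chance-corrected measures like Cohen's kappa would give lower numbers on the same verdicts.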
Usage is polarized: Reps are 59% more likely to use Grok, Dems 16% more likely to use Perplexity. BUT Reps are ~2x more likely to be targeted by fact-check requests, and Rep posts are rated as false more often, even by Grok. This extends prior results on partisan asymmetry in misinformation.
03.02.2026 21:55
We examine *ALL* English tags of Grok + Perplexity on X from Feb–Sep 2025.
First finding: fact-checking is not a niche use case; it accounts for ~7.6% of all direct interactions with these LLM bots on X. The primary focus is politics and current events.
03.02.2026 21:55
Please contact Nina if you're interested in working with us! Much of this work is also with @dgrand.bsky.social & @tomcostello.bsky.social, and others! Very fun collaborative environment. And Nina is wonderful to work with!! (She is also the coolest among us, FWIW)
02.02.2026 21:30
If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...
“LLMs can effectively convince people to believe conspiracies”
But telling the AI not to lie might help.
Details in thread
20.01.2026 14:59
These authors wanted to know whether people with physical disabilities face discrimination in hiring, even when they are equally qualified.
So they ran an experiment.
13.01.2026 01:06
www.science.org/doi/10.1126/...
16.12.2025 01:59
Mapping the online manipulation economy
A market perspective on digital manipulation may help improve online trust and safety
New paper out in @science.org! We unveil the online manipulation market with the Cambridge Online Trust & Safety Index (COTSI). We show in real time the cost of purchasing fake accounts across every social platform around the world, so they can be held accountable.
www.science.org/doi/10.1126/...
11.12.2025 19:05
Our new study provides rare causal evidence about NYC's speed camera program. We find large reductions in collisions (30%) and injuries (16%) near intersections with cameras. www.pnas.org/doi/abs/10.1... @astagoff.bsky.social @brendenbeck.bsky.social 🧪
08.12.2025 20:08
🚨 New in Nature + Science! 🚨
AI chatbots can shift voter attitudes on candidates & policies, often by 10+ pp
• Experiments in the US, Canada, Poland & UK
• More “facts” → more persuasion (not psych tricks)
• Increasing persuasiveness reduces "fact" accuracy
• Right-leaning bots = more inaccurate
04.12.2025 20:42
Yes, for sure; they have a proprietary interest in keeping the prompts hidden. But perhaps regulation could force them to reveal their prompts? There's also the technical question of whether there is a way to make prompt reveals credible (i.e., prevent lying about the prompt a model uses).
06.12.2025 20:54