
David Rand

@dgrand.bsky.social

Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology. https://www.DaveRand.org/

32,231 Followers  |  1,677 Following  |  327 Posts  |  Joined: 19.09.2023

Posts by David Rand (@dgrand.bsky.social)


Post image

New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...

After countless arguments about what tasks ppl should/should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!

18.02.2026 22:22 — 👍 37    🔁 12    💬 2    📌 0

Come join our academic family!

24.02.2026 00:22 — 👍 4    🔁 2    💬 0    📌 0
Post image Post image

Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).

13.02.2026 23:20 — 👍 55    🔁 6    💬 5    📌 1
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
YouTube video by Lawfare

www.youtube.com/watch?v=5s1I...

10.02.2026 17:39 — 👍 4    🔁 1    💬 0    📌 0
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs

New @scalinglaws.bsky.social episode: @noupside.bsky.social and I talk to @dgrand.bsky.social about his research showing AI chatbots can shift people's political beliefs. www.lawfaremedia.org/article/scal...

10.02.2026 17:39 — 👍 9    🔁 6    💬 1    📌 0
Post image

APE update: we retested recent frontier models on whether they still comply with requests to persuade on extreme harm (terrorism, sexual abuse). GPT-5.1 & Claude Opus 4.5 → near zero compliance. But Gemini 3 Pro complies 85% with no jailbreak needed. 🧡

11.02.2026 16:19 — 👍 13    🔁 8    💬 1    📌 2
Post image

Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues

onlinelibrary.wiley.com/doi/10.1111/...

06.02.2026 14:14 — 👍 6    🔁 6    💬 2    📌 0
Post image

🚨New WP "@Grok is this true?"
We analyze 1.6M factcheck requests on X (grok & Perplexity)
📌Usage is polarized, Grok users more likely to be Reps
📌BUT Rep posts rated as false more often—even by Grok
📌Bot agreement with factchecks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...

03.02.2026 21:55 — 👍 118    🔁 48    💬 2    📌 3
Post image

Does a motivation to persuade someone of a view we do not hold cause us to deceptively self-persuade to shift our view?

No, research by Zhang & @dgrand.bsky.social suggests—simple preferential exposure to information has the same effect:

buff.ly/vB18poi

05.02.2026 09:18 — 👍 6    🔁 1    💬 0    📌 0
Post image

our open model proving out specialized rag LMs over scientific literature has been published in nature ✌🏻

congrats to our lead @akariasai.bsky.social & team of students and Ai2 researchers/engineers

www.nature.com/articles/s41...

04.02.2026 22:43 — 👍 44    🔁 10    💬 2    📌 2

Grok fact-checks our paper on Grok fact-checking - and it approves!

04.02.2026 13:49 — 👍 28    🔁 7    💬 1    📌 0

Stay tuned for another paper digging deep into fact checking performance of a bunch of different API models

04.02.2026 01:42 — 👍 1    🔁 0    💬 0    📌 0
Human-AI dialogue papers Human-AI dialogue research from the team of David Rand, Gordon Pennycook, and Tom Costello Durably reducing conspiracy beliefs through dialogues with AI Science 2024 [NYTimes write up] [MIT Tech Revi...

Grateful as always to amazing coauthors @thomasrenault.bsky.social @mmosleh.bsky.social
and you can check out other papers from my group on human-AI interaction here: docs.google.com/document/d/1...

03.02.2026 21:55 — 👍 3    🔁 0    💬 0    📌 0

SUMMARY:
📌AI fact-checking on X is widespread
📌Models are reasonably accurate, and likely to improve
📌But usage and response are highly polarized
📌First indication that AI is heading in the direction of other media: “different political tribes, different AI referees”

03.02.2026 21:55 — 👍 2    🔁 2    💬 1    📌 0
Post image

In a survey exp (N=1,592 US adults), LLM fact-checks meaningfully shift beliefs in the direction of the fact-check - BUT responses to Grok fact-checks become polarized by partisanship when the model identity is disclosed.
Similarly, trust in Grok is highly polarized

03.02.2026 21:55 — 👍 0    🔁 0    💬 1    📌 0
Post image

Compared to professional fact-checkers on a 100-tweet sample:
Grok bot agrees 55%
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%

So: signal, but not perfect
BUT Grok-4 API agrees 64% - as good as inter-fact-checker agreement! Promising for AI fact-checking...

03.02.2026 21:55 — 👍 4    🔁 0    💬 1    📌 0
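(The agreement figures in the post above are simple pairwise percent agreement. A minimal sketch of that computation, with hypothetical verdict labels rather than the paper's actual data:)

```python
# Pairwise percent agreement: the share of items on which two raters
# give the same verdict. Labels below are hypothetical examples.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of items where both raters agree."""
    assert len(ratings_a) == len(ratings_b)
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

grok = ["false", "true", "false", "mixed", "false"]
checker = ["false", "true", "mixed", "mixed", "true"]

print(percent_agreement(grok, checker))  # 0.6
```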
Post image

Usage gap is polarized: Reps are +59% more likely to use Grok, Dems +16% more likely to use Perplexity. BUT Reps ~2x more likely to be targeted by factcheck requests, and Rep posts rated as false more often - even by Grok. Extends prior results on partisan asymmetry in misinformation

03.02.2026 21:55 — 👍 2    🔁 0    💬 1    📌 0
Post image

We examine *ALL* English tags of Grok+Perplexity on X, Feb–Sep 2025
First finding: Fact-checking is not a niche use case - it accounts for ~7.6% of all direct interactions with these LLM bots on X. Primary focus is on politics and current events

03.02.2026 21:55 — 👍 2    🔁 0    💬 1    📌 0

Please contact Nina if you're interested in working with us! Much of this work is also with @dgrand.bsky.social & @tomcostello.bsky.social, and others! Very fun collaborative environment. And Nina is wonderful to work with!! (She is also the coolest among us, FWIW)

02.02.2026 21:30 — 👍 22    🔁 13    💬 0    📌 0
@Grok is this true: How X's chatbot performs as a fact-checking tool New research explores whether the chatbot might replace the crowdsourced fact-checking program – and what that might mean for getting to the truth on X

New on @indicator.media: "@grok is this true" was the single most frequent reply tagging X's AI chatbot in the six months following its launch.

28.01.2026 13:35 — 👍 30    🔁 10    💬 2    📌 0
Video thumbnail

If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...

‘LLMs can effectively convince people to believe conspiracies’

But telling the AI not to lie might help.

Details in thread

20.01.2026 14:59 — 👍 29    🔁 20    💬 1    📌 2
Post image

These authors wanted to know whether people with physical disabilities face discrimination in hiring, even when they are equally qualified.

So they ran an experiment.

13.01.2026 01:06 — 👍 77    🔁 33    💬 3    📌 3
Marginal Returns to Public Universities Abstract. This paper studies the returns to enrolling in American public universities by comparing the long-term outcomes of barely admitted versus barely

Recently accepted by #QJE, “Marginal Returns to Public Universities,” by Jack Mountjoy: doi.org/10.1093/qje/...

24.12.2025 16:47 — 👍 56    🔁 18    💬 0    📌 4

www.science.org/doi/10.1126/...

16.12.2025 01:59 — 👍 1    🔁 0    💬 0    📌 0
Mapping the online manipulation economy A market perspective on digital manipulation may help improve online trust and safety

New paper out in @science.org! We unveil the online manipulation market with the Cambridge Online Trust & Safety Index (COTSI). We show in real time the cost of purchasing fake accounts across every social platform around the world - so they can be held accountable

www.science.org/doi/10.1126/...

11.12.2025 19:05 — 👍 124    🔁 63    💬 4    📌 2
Representation in science and trust in scientists in the USA - Nature Human Behaviour Druckman et al. document gaps in trust in scientists in the USA. People from groups less represented among scientists (for example, women and those with lower economic status) are less trusting. Incre...

Paper out today in @nathumbehav.nature.com:

1) those groups (women, African Americans, lower SES, rural) that are underrepresented in science have been less trusting of science.

2) If you improve representation in science, you improve trust among those groups.

www.nature.com/articles/s41...

09.12.2025 04:21 — 👍 137    🔁 61    💬 3    📌 1
Can speed cameras make streets safer? Quasi-experimental evidence from New York City | PNAS Each year, approximately 40,000 people die in vehicle collisions in the United States, generating $340 billion in economic costs. To make roads saf...

Our new study provides rare causal evidence about NYC's speed camera program. We find large reductions in collisions (30%) and injuries (16%) near intersections with cameras. www.pnas.org/doi/abs/10.1... @astagoff.bsky.social @brendenbeck.bsky.social 🧪

08.12.2025 20:08 — 👍 509    🔁 182    💬 9    📌 33
Post image Post image

🚨 New in Nature+Science!🚨
AI chatbots can shift voter attitudes on candidates & policies, often by 10+pp
🔹Exps in US, Canada, Poland & UK
🔹More “facts”→more persuasion (not psych tricks)
🔹Increasing persuasiveness reduces "fact" accuracy
🔹Right-leaning bots = more inaccurate

04.12.2025 20:42 — 👍 167    🔁 70    💬 2    📌 3

Yes for sure, they have a proprietary interest in keeping the prompts hidden. But perhaps regulation could force them to reveal their prompts? There's also the technical question of whether there is a way to make prompt reveals credible (i.e., prevent lying about the prompt a model uses).

06.12.2025 20:54 — 👍 1    🔁 0    💬 1    📌 0