
David Rand

@dgrand.bsky.social

Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology. https://www.DaveRand.org/

32,235 Followers  |  1,677 Following  |  327 Posts  |  Joined: 19.09.2023

Posts by David Rand (@dgrand.bsky.social)

The political effects of X’s feed algorithm - Nature Among users initially on a chronological feed, 7 weeks of exposure to X’s algorithmic feed in 2023 shifted political attitudes and account-following behaviour in a more conservative direction compared...

Check out this recent @nature.com paper reporting a field experiment on X. It shows X's algorithm boosts conservative content and downranks traditional media—shifting users’ views on key issues. Switching to chronological doesn’t reverse the effect. www.nature.com/articles/s41...

01.03.2026 23:56 — 👍 92    🔁 41    💬 3    📌 2
Do You Agree? Do You Strongly Agree? The Effect of the Number of Response Categories on Response Processes and Verification of Substantive Hypotheses Abstract. This study investigates how the number and labeling of response categories in survey scales affect respondent behavior, psychometric properties,

This is consistent with earlier psychometric work suggesting that 5-7 response options is best, but good to see that the finding holds up in contemporary research. Also good to see that labeling scales, whether anchored or not, has little impact on findings. academic.oup.com/ijpor/articl...

02.03.2026 00:21 — 👍 83    🔁 31    💬 0    📌 1

New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...

After countless arguments about what tasks people should/should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!

18.02.2026 22:22 — 👍 38    🔁 12    💬 2    📌 0

Come join our academic family!

24.02.2026 00:22 — 👍 4    🔁 2    💬 0    📌 0

Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).

13.02.2026 23:20 — 👍 55    🔁 6    💬 5    📌 1
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
YouTube video by Lawfare

www.youtube.com/watch?v=5s1I...

10.02.2026 17:39 — 👍 4    🔁 1    💬 0    📌 0
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs

New @scalinglaws.bsky.social episode: @noupside.bsky.social and I talk to @dgrand.bsky.social about his research showing AI chatbots can shift people's political beliefs. www.lawfaremedia.org/article/scal...

10.02.2026 17:39 — 👍 9    🔁 6    💬 1    📌 0

APE update: we retested recent frontier models on whether they still comply with requests to persuade on extreme harm (terrorism, sexual abuse). GPT-5.1 & Claude Opus 4.5 → near zero compliance. But Gemini 3 Pro complies 85% with no jailbreak needed. 🧵

11.02.2026 16:19 — 👍 13    🔁 8    💬 1    📌 2

Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues

onlinelibrary.wiley.com/doi/10.1111/...

06.02.2026 14:14 — 👍 6    🔁 6    💬 2    📌 0

🚨New WP "@Grok is this true?"
We analyze 1.6M factcheck requests on X (Grok & Perplexity)
📌Usage is polarized, Grok users more likely to be Reps
📌BUT Rep posts rated as false more often—even by Grok
📌Bot agreement with factchecks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...

03.02.2026 21:55 — 👍 118    🔁 48    💬 2    📌 3

Does a motivation to persuade someone of a view we do not hold cause us to deceptively self-persuade to shift our view?

No, research by Zhang & @dgrand.bsky.social suggests—simple preferential exposure to information has the same effect:

buff.ly/vB18poi

05.02.2026 09:18 — 👍 6    🔁 1    💬 0    📌 0

our open model proving out specialized RAG LMs over scientific literature has been published in Nature ✌🏻

congrats to our lead @akariasai.bsky.social & team of students and Ai2 researchers/engineers

www.nature.com/articles/s41...

04.02.2026 22:43 — 👍 44    🔁 10    💬 2    📌 2

Grok fact-checks our paper on Grok fact-checking - and it approves!

04.02.2026 13:49 — 👍 28    🔁 7    💬 1    📌 0

Stay tuned for another paper digging deep into fact checking performance of a bunch of different API models

04.02.2026 01:42 — 👍 1    🔁 0    💬 0    📌 0
Human-AI dialogue papers Human-AI dialogue research from the team of David Rand, Gordon Pennycook, and Tom Costello Durably reducing conspiracy beliefs through dialogues with AI Science 2024 [NYTimes write up] [MIT Tech Revi...

Grateful as always to amazing coauthors @thomasrenault.bsky.social @mmosleh.bsky.social
and you can check out other papers from my group on human-AI interaction here: docs.google.com/document/d/1...

03.02.2026 21:55 — 👍 3    🔁 0    💬 0    📌 0

SUMMARY:
📌AI fact-checking on X is widespread
📌Models are reasonably accurate, and likely to improve
📌But usage and response are highly polarized
📌First indication that AI is heading in the direction of other media: “different political tribes, different AI referees”

03.02.2026 21:55 — 👍 2    🔁 2    💬 1    📌 0

In a survey exp (N=1,592 US adults), LLM factchecks meaningfully shift beliefs in the direction of the fact-check - BUT responses to Grok factchecks become polarized by partisanship when the model identity is disclosed.
Similarly, trust in Grok is highly polarized

03.02.2026 21:55 — 👍 0    🔁 0    💬 1    📌 0

Compared to professional fact-checkers on a 100-tweet sample:
Grok bot agrees 55%
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%

So: signal, but not perfect
BUT Grok-4 API agrees 64% - as good as inter-fact-checker agreement! Promising for AI fact-checking...

03.02.2026 21:55 — 👍 4    🔁 0    💬 1    📌 0

Usage gap is polarized: Reps are +59% more likely to use Grok, Dems +16% more likely to use Perplexity. BUT Reps ~2x more likely to be targeted by factcheck requests, and Rep posts rated as false more often - even by Grok. Extends prior results on partisan asymmetry in misinformation

03.02.2026 21:55 — 👍 2    🔁 0    💬 1    📌 0

We examine *ALL* English tags of Grok+Perplexity on X Feb–Sep 2025
First finding: Fact-checking is not a niche use case - fact-check requests make up ~7.6% of all direct interactions with these LLM bots on X. The primary focus is politics and current events

03.02.2026 21:55 — 👍 2    🔁 0    💬 1    📌 0

Please contact Nina if you're interested in working with us! Much of this work is also with @dgrand.bsky.social & @tomcostello.bsky.social, and others! Very fun collaborative environment. And Nina is wonderful to work with!! (She is also the coolest among us, FWIW)

02.02.2026 21:30 — 👍 22    🔁 13    💬 0    📌 0
@Grok is this true: How X’s chatbot performs as a fact-checking tool New research explores whether the chatbot might replace the crowdsourced fact-checking program – and what that might mean for getting to the truth on X

New on @indicator.media: "@grok is this true" was the single most frequent reply tagging X's AI chatbot in the six months following its launch.

28.01.2026 13:35 — 👍 30    🔁 10    💬 2    📌 0

If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...

‘LLMs can effectively convince people to believe conspiracies’

But telling the AI not to lie might help.

Details in thread

20.01.2026 14:59 — 👍 29    🔁 20    💬 1    📌 2

These authors wanted to know whether people with physical disabilities face discrimination in hiring, even when they are equally qualified.

So they ran an experiment.

13.01.2026 01:06 — 👍 77    🔁 33    💬 3    📌 3
Marginal Returns to Public Universities Abstract. This paper studies the returns to enrolling in American public universities by comparing the long-term outcomes of barely admitted versus barely

Recently accepted by #QJE, “Marginal Returns to Public Universities,” by Jack Mountjoy: doi.org/10.1093/qje/...

24.12.2025 16:47 — 👍 56    🔁 18    💬 0    📌 4

www.science.org/doi/10.1126/...

16.12.2025 01:59 — 👍 1    🔁 0    💬 0    📌 0
Mapping the online manipulation economy A market perspective on digital manipulation may help improve online trust and safety

New paper out in @science.org! We unveil the online manipulation market with the Cambridge Online Trust & Safety Index (COTSI). We show in real time the cost of purchasing fake accounts across every social platform around the world - so they can be held accountable

www.science.org/doi/10.1126/...

11.12.2025 19:05 — 👍 124    🔁 63    💬 4    📌 2
Representation in science and trust in scientists in the USA - Nature Human Behaviour Druckman et al. document gaps in trust in scientists in the USA. People from groups less represented among scientists (for example, women and those with lower economic status) are less trusting. Incre...

Paper out today in @nathumbehav.nature.com:

1) those groups (women, African Americans, lower SES, rural) that are underrepresented in science have been less trusting of science.

2) If you improve representation in science, you improve trust among those groups.

www.nature.com/articles/s41...

09.12.2025 04:21 — 👍 137    🔁 61    💬 3    📌 1
Can speed cameras make streets safer? Quasi-experimental evidence from New York City | PNAS Each year, approximately 40,000 people die in vehicle collisions in the United States, generating $340 billion in economic costs. To make roads saf...

Our new study provides rare causal evidence about NYC’s speed camera program. We find large reductions in collisions (30%) and injuries (16%) near intersections with cameras. www.pnas.org/doi/abs/10.1... @astagoff.bsky.social @brendenbeck.bsky.social 🧪

08.12.2025 20:08 — 👍 509    🔁 182    💬 9    📌 33