The political effects of X’s feed algorithm - Nature
Among users initially on a chronological feed, 7 weeks of exposure to X’s algorithmic feed in 2023 shifted political attitudes and account-following behaviour in a more conservative direction compared...
Check out this recent @nature.com paper reporting a field experiment on X. It shows X's algorithm boosts conservative content and downranks traditional media—shifting users’ views on key issues. Switching to chronological doesn’t reverse the effect. www.nature.com/articles/s41...
01.03.2026 23:56 —
👍 92
🔁 41
💬 3
📌 2
New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...
After countless arguments about what tasks ppl should/should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!
18.02.2026 22:22 —
👍 38
🔁 12
💬 2
📌 0
Come join our academic family!
24.02.2026 00:22 —
👍 4
🔁 2
💬 0
📌 0
Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).
13.02.2026 23:20 —
👍 55
🔁 6
💬 5
📌 1
YouTube video by Lawfare
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
www.youtube.com/watch?v=5s1I...
10.02.2026 17:39 —
👍 4
🔁 1
💬 0
📌 0
Scaling Laws: The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
New @scalinglaws.bsky.social episode: @noupside.bsky.social and I talk to @dgrand.bsky.social about his research showing AI chatbots can shift people's political beliefs. www.lawfaremedia.org/article/scal...
10.02.2026 17:39 —
👍 9
🔁 6
💬 1
📌 0
APE update: we retested recent frontier models on whether they still comply with requests to persuade on extreme harm (terrorism, sexual abuse). GPT-5.1 & Claude Opus 4.5 → near zero compliance. But Gemini 3 Pro complies 85% with no jailbreak needed. 🧵
11.02.2026 16:19 —
👍 13
🔁 8
💬 1
📌 2
Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues
onlinelibrary.wiley.com/doi/10.1111/...
06.02.2026 14:14 —
👍 6
🔁 6
💬 2
📌 0
🚨New WP "@Grok is this true?"
We analyze 1.6M factcheck requests on X (Grok & Perplexity)
📌Usage is polarized, Grok users more likely to be Reps
📌BUT Rep posts rated as false more often—even by Grok
📌Bot agreement with factchecks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
03.02.2026 21:55 —
👍 118
🔁 48
💬 2
📌 3
Does a motivation to persuade someone of a view we do not hold cause us to deceptively self-persuade and shift our own view?
No, research by Zhang & @dgrand.bsky.social suggests: simple preferential exposure to information has the same effect.
buff.ly/vB18poi
05.02.2026 09:18 —
👍 6
🔁 1
💬 0
📌 0
our open model proving out specialized RAG LMs over scientific literature has been published in nature ✌🏻
congrats to our lead @akariasai.bsky.social & team of students and Ai2 researchers/engineers
www.nature.com/articles/s41...
04.02.2026 22:43 —
👍 44
🔁 10
💬 2
📌 2
Grok fact-checks our paper on Grok fact-checking - and it approves!
04.02.2026 13:49 —
👍 28
🔁 7
💬 1
📌 0
Stay tuned for another paper digging deep into the fact-checking performance of a bunch of different API models
04.02.2026 01:42 —
👍 1
🔁 0
💬 0
📌 0
SUMMARY:
📌AI fact-checking on X is widespread
📌Models are reasonably accurate, and likely to improve
📌But usage and response are highly polarized
📌First indication that AI is heading in the direction of other media: “different political tribes, different AI referees”
03.02.2026 21:55 —
👍 2
🔁 2
💬 1
📌 0
In a survey exp (N=1,592 US adults), LLM factchecks meaningfully shift beliefs in the direction of the fact-check - BUT responses to Grok factchecks become polarized by partisanship when the model's identity is disclosed.
Similarly, trust in Grok is highly polarized
03.02.2026 21:55 —
👍 0
🔁 0
💬 1
📌 0
Compared to professional fact-checkers on a 100-tweet sample:
Grok bot agrees 55%
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%
So: signal, but not perfect
BUT the Grok-4 API agrees 64% - as good as inter-fact-checker agreement! Promising for AI fact-checking...
03.02.2026 21:55 —
👍 4
🔁 0
💬 1
📌 0
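The agreement figures in the post above read as simple percent agreement over the shared 100-tweet sample. A minimal sketch of that computation (the verdict labels and data below are invented placeholders, not the paper's actual ratings):

```python
# Percent agreement between two raters over the same set of items.
# Verdicts here are illustrative placeholders, not the paper's data.

def percent_agreement(ratings_a, ratings_b):
    """Share of items on which two raters give the same verdict."""
    assert len(ratings_a) == len(ratings_b)
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical verdicts on 5 tweets ("false", "true", "unverified"):
grok_bot      = ["false", "true", "false", "unverified", "true"]
fact_checkers = ["false", "true", "true",  "false",      "true"]

print(percent_agreement(grok_bot, fact_checkers))  # 0.6
```

Note that raw percent agreement does not correct for chance; a chance-corrected statistic such as Cohen's kappa would give a more conservative picture when verdict categories are few.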
Usage is polarized: Reps are +59% more likely to use Grok, Dems +16% more likely to use Perplexity. BUT Reps are ~2x more likely to be targeted by factcheck requests, and Rep posts are rated false more often - even by Grok. This extends prior results on partisan asymmetry in misinformation.
03.02.2026 21:55 —
👍 2
🔁 0
💬 1
📌 0
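The "+59% more likely" figure above has the form of a rate ratio expressed as a percentage increase over a baseline group. A sketch with invented per-capita rates (not the paper's numbers):

```python
# Relative-usage increase as a rate ratio minus one, in percent.
# The rates below are hypothetical, chosen only to reproduce a +59% gap.

def relative_increase(rate_group, rate_baseline):
    """How much more likely one group is, as a percent over the baseline."""
    return (rate_group / rate_baseline - 1) * 100

rep_grok_rate = 0.159  # hypothetical Grok requests per Republican user
dem_grok_rate = 0.100  # hypothetical Grok requests per Democrat user

print(f"+{relative_increase(rep_grok_rate, dem_grok_rate):.0f}%")  # +59%
```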
We examine *ALL* English tags of Grok+Perplexity on X, Feb–Sep 2025
First finding: fact-checking is not a niche use case - it accounts for ~7.6% of all direct interactions with these LLM bots on X. The primary focus is politics and current events
03.02.2026 21:55 —
👍 2
🔁 0
💬 1
📌 0
Please contact Nina if you're interested in working with us! Much of this work is also with @dgrand.bsky.social & @tomcostello.bsky.social, and others! Very fun collaborative environment. And Nina is wonderful to work with!! (She is also the coolest among us, FWIW)
02.02.2026 21:30 —
👍 22
🔁 13
💬 0
📌 0
If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...
‘LLMs can effectively convince people to believe conspiracies’
But telling the AI not to lie might help.
Details in thread
20.01.2026 14:59 —
👍 29
🔁 20
💬 1
📌 2
These authors wanted to know whether people with physical disabilities face discrimination in hiring, even when they are equally qualified.
So they ran an experiment.
13.01.2026 01:06 —
👍 77
🔁 33
💬 3
📌 3
www.science.org/doi/10.1126/...
16.12.2025 01:59 —
👍 1
🔁 0
💬 0
📌 0
Mapping the online manipulation economy
A market perspective on digital manipulation may help improve online trust and safety
New paper out in @science.org! We unveil the online manipulation market with the Cambridge Online Trust & Safety Index (COTSI). We show in real time the cost of purchasing fake accounts across every social platform around the world - so platforms can be held accountable
www.science.org/doi/10.1126/...
11.12.2025 19:05 —
👍 124
🔁 63
💬 4
📌 2
Can speed cameras make streets safer? Quasi-experimental evidence from New York City | PNAS
Each year, approximately 40,000 people die in vehicle collisions in the United States,
generating $340 billion in economic costs. To make roads saf...
Our new study provides rare causal evidence about NYC’s speed camera program. We find large reductions in collisions (30%) and injuries (16%) near intersections with cameras. www.pnas.org/doi/abs/10.1... @astagoff.bsky.social @brendenbeck.bsky.social 🧪
08.12.2025 20:08 —
👍 509
🔁 182
💬 9
📌 33