David Duvenaud on why ‘aligned AI’ could still kill democracy | 80,000 Hours
A new paper co-authored by SRI Chair @davidduvenaud.bsky.social examines “gradual disempowerment”: how incremental AI deployment could steadily reduce human influence over the economy, culture, and the state—without a single abrupt takeover.
80000Hours feature: 80000hours.org/podcast/epis...
03.02.2026 18:41 —
YouTube video by 80,000 Hours
Artificial General Intelligence leads to oligarchy | David Duvenaud, ex-Anthropic
My interview with Rob Wiblin on Gradual Disempowerment is up: www.youtube.com/watch?v=XV3e...
I make the case that, even if we solve the technical problem of aligning powerful AIs, our institutions, culture, and governments will serve us less well once we're all drags on growth.
27.01.2026 18:39 —
Makes sense, thanks for elaborating.
18.11.2025 17:18 —
Your claim is correct, but that seems like a pretty contrived example, no?
18.11.2025 15:51 —
That's great. But do you have any idea of the magnitude of change in odds in various circumstances? Surely this was examined by public health people?
04.11.2025 15:34 —
I'm happy for you. How much difference do you think it makes in your reinfection odds whether other people mask?
04.11.2025 15:19 —
Iason Gabriel of Google DeepMind on Resisting Disempowerment
Atoosa Kasirzadeh of CMU on "Taking post-AGI human power seriously"
Deger Turan, CEO of Metaculus, on "Concrete Mechanisms for Slow Loss of Control"
28.10.2025 22:06 —
Beren Millidge of Zyphra on "When does competition lead to recognisable values?"
Anna Yelizarova of Windfall Trust on "What would UBI actually entail?"
Ivan Vendrov of Midjourney on "Supercooperation as an alternative to Superintelligence"
28.10.2025 22:06 —
The draft program features:
Anton Korinek on the Economics of Transformative AI
Alex Tamkin of Anthropic on "The fractal nature of automation vs. augmentation"
Anders Sandberg on "Cyborg Leviathans and Human Niche Construction"
28.10.2025 22:06 —
How might the world look after the development of AGI, and what should we do about it now? Help us think about this at our workshop on Post-AGI Economics, Culture and Governance!
We’ll host speakers from political theory, economics, mechanism design, history, and hierarchical agency.
post-agi.org
28.10.2025 22:06 —
What's the difference, in your view?
25.10.2025 17:37 —
More generally, we worry that liberalism itself is under threat - that the positive-sum-ness of laissez-faire governance won’t hold when citizens are mostly fighting over UBI. We hope we’re wrong!
19.09.2025 21:04 —
“So far, we humans have been steering our civilisation on easy mode—wherever people went, they were indispensable. Now we have to hit a dauntingly narrow target: to create a civilisation that will care for us indefinitely—even when it doesn’t need us.”
19.09.2025 21:04 —
“The average North Korean farmer has almost no power over the state, but they are still useful. The state can’t function unless it feeds its citizens. In an era of general automation, even this minimal duty of care will go.”
19.09.2025 21:04 —
“The right to vote is the most visible sign of human influence over the state. But consider all the other levers of influence that come from economic power, such as lobbying, protesting and striking, which would also be eroded by mass automation.”
19.09.2025 21:04 —
Some highlights:
“Democracies are still quite young, and were made possible only by technologies that made liberal, pluralistic societies globally competitive. We’re fortunate to have lived through this great confluence of human flourishing and state power, but we can’t take it for granted.”
19.09.2025 21:04 —
Raymond Douglas and I on how AI job loss could hurt democracy. "No taxation without representation" captures how, historically, democratic rights have flowed from economic power. But this might work in reverse once we're all on UBI: No representation without taxation!
bsky.app/profile/econ...
19.09.2025 21:04 —
It's fair to say that people have predicted massive permanent unemployment before and been wrong. But our piece asks what happens when everyone actually does become permanently unemployable.
19.09.2025 21:02 —
I agree. I was just reading a LessWrong comment making a similar point:
"Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life."
www.lesswrong.com/posts/onsZ4J...
10.07.2025 15:09 —
It’ll be co-located with ICML. Our workshop is a separate event, so no need to register for ICML to attend ours! Ours is free but invite-only; please apply on our site:
www.post-agi.org
Co-organized with Raymond Douglas, Nora Ammann,
@kulveit.bsky.social, and @davidskrueger.bsky.social
18.06.2025 18:12 —
- Are there multiple, qualitatively different basins of attraction of future civilizations?
- Do Malthusian conditions necessarily make it hard to preserve uncompetitive, idiosyncratic values?
- What empirical evidence could help us tell which trajectory we’re on?
18.06.2025 18:12 —
Some empirical questions we hope to discuss:
- Could alignment of single AIs to single humans be sufficient to solve global coordination problems?
- Will agency tend to operate at ever-larger scales, multiple scales, or something else?
18.06.2025 18:12 —
Some concrete topics we hope to address:
- What future trajectories are plausible?
- What mechanisms could support long-term legacies?
- New theories of agency, power, and social dynamics.
- AI representatives and new coordination mechanisms.
- How will AI alter cultural evolution?
18.06.2025 18:12 —
And Anna Yelizarova, @fbarez.bsky.social, @scasper.bsky.social, Beatrice Erkers, among others.
We'll draw from political theory, cooperative AI, economics, mechanism design, history, and hierarchical agency.
18.06.2025 18:12 —
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop!
Post-AGI Civilizational Equilibria: Are there any good ones?
Vancouver, July 14th
www.post-agi.org
Featuring: Joe Carlsmith, @richardngo.bsky.social, Emmett Shear ... 🧵
18.06.2025 18:12 —
Thanks for explaining, but I'm still confused. LLMs succeed regularly at following complex natural-language instructions without examples - it's their bread and butter. I agree they sometimes have problems executing algorithms consistently (unless fine-tuned to do so), but so do untrained humans.
17.06.2025 18:39 —
"only those individuals who explicitly understood a task (via a natural language explanation) reached a correct solution whereas implicit trial and error reinforcement failed to converge. This ... has yet to be demonstrated in an LLM."
Is this claiming LLMs haven't been shown to benefit from hints?
16.06.2025 17:58 —
Thanks for clarifying. I agree that singulatarian scenarios can be naive, breathless, and simplistic. But this piece seems to me to overstate its case if it's plausible AI will make most humans unemployable. I'd love to hear your thoughts about life after most work is automated, if you have time.
14.06.2025 01:26 —