I'm not sure I understand - are you saying these claims should be censored within academia or not? It sounds like you're saying they shouldn't be protected, which would amount to censorship imo.
A new paper co-authored by SRI Chair @davidduvenaud.bsky.social examines “gradual disempowerment”: how incremental AI deployment could steadily reduce human influence over the economy, culture, and the state—without a single abrupt takeover.
80000Hours feature: 80000hours.org/podcast/epis...
My interview with Rob Wiblin on Gradual Disempowerment is up: www.youtube.com/watch?v=XV3e...
I make the case that, even if we solve the technical problem of aligning powerful AIs, our institutions, culture, and governments will serve us less well once we're all drags on growth.
Makes sense, thanks for elaborating.
Your claim is correct, but that seems like a pretty contrived example, no?
That's great. But do you have any idea of the magnitude of change in odds in various circumstances? Surely this was examined by public health people?
I'm happy for you. How much difference do you think it makes in your reinfection odds whether other people mask?
This workshop follows one we ran in July, adding optional specialized talks and light moderation in the breakout sessions. To see how that one went, and videos of the talks, see this thread:
www.lesswrong.com/posts/csdn3e...
It’ll be co-located with NeurIPS. Our workshop is a separate event, so no need to register for NeurIPS to attend ours! Ours is free but invite-only; please apply here:
forms.gle/xcfgBNmaP7Wk...
Co-organized with @kulveit.bsky.social @scasper.bsky.social Raymond Douglas, and Maria Kostylew
Iason Gabriel of Google Deepmind on Resisting Disempowerment
Atoosa Kasirzadeh of CMU on "Taking post-AGI human power seriously"
Deger Turan, CEO of Metaculus on "Concrete Mechanisms for Slow Loss of Control"
Beren Millidge of Zyphra on "When does competition lead to recognisable values?"
Anna Yelizarova of Windfall Trust on "What would UBI actually entail?"
Ivan Vendrov of Midjourney on "Supercooperation as an alternative to Superintelligence"
The draft program features:
Anton Korinek on the Economics of Transformative AI
Alex Tamkin of Anthropic on "The fractal nature of automation vs. augmentation"
Anders Sandberg on "Cyborg Leviathans and Human Niche Construction"
How might the world look after the development of AGI, and what should we do about it now? Help us think about this at our workshop on Post-AGI Economics, Culture and Governance!
We’ll host speakers from political theory, economics, mechanism design, history, and hierarchical agency.
post-agi.org
What's the difference, in your view?
More generally, we worry that liberalism itself is under threat - that the positive-sum-ness of laissez-faire governance won’t hold when citizens are mostly fighting over UBI. We hope we’re wrong!
“So far, we humans have been steering our civilisation on easy mode—wherever people went, they were indispensable. Now we have to hit a dauntingly narrow target: to create a civilisation that will care for us indefinitely—even when it doesn’t need us.”
“The average North Korean farmer has almost no power over the state, but they are still useful. The state can’t function unless it feeds its citizens. In an era of general automation, even this minimal duty of care will go.”
“The right to vote is the most visible sign of human influence over the state. But consider all the other levers of influence that come from economic power, such as lobbying, protesting and striking, which would also be eroded by mass automation.”
Some highlights:
“Democracies are still quite young, and were made possible only by technologies that made liberal, pluralistic societies globally competitive. We’re fortunate to have lived through this great confluence of human flourishing and state power, but we can’t take it for granted.”
Me and Raymond Douglas on how AI job loss could hurt democracy. “No taxation without representation” captures how, historically, democratic rights have flowed from economic power. But this might work in reverse once we’re all on UBI: no representation without taxation!
bsky.app/profile/econ...
It's fair to say that people have predicted massive permanent unemployment before and been wrong. But our piece is asking what happens when everyone actually does become permanently unemployable.
I agree. I was just reading a LessWrong comment making a similar point:
"Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life."
www.lesswrong.com/posts/onsZ4J...
It’ll be co-located with ICML. Our workshop is a separate event, so no need to register for ICML to attend ours! Ours is free but invite-only; please apply on our site:
www.post-agi.org
Co-organized with Raymond Douglas, Nora Ammann,
@kulveit.bsky.social, and @davidskrueger.bsky.social
- Are there multiple, qualitatively different basins of attraction of future civilizations?
- Do Malthusian conditions necessarily make it hard to preserve uncompetitive, idiosyncratic values?
- What empirical evidence could help us tell which trajectory we’re on?
Some empirical questions we hope to discuss:
- Could alignment of single AIs to single humans be sufficient to solve global coordination problems?
- Will agency tend to operate at ever-larger scales, multiple scales, or something else?
Some concrete topics we hope to address:
- What future trajectories are plausible?
- What mechanisms could support long-term legacies?
- New theories of agency, power, and social dynamics.
- AI representatives and new coordination mechanisms.
- How will AI alter cultural evolution?
And Anna Yelizarova, @fbarez.bsky.social, @scasper.bsky.social, Beatrice Erkers, among others.
We'll draw from political theory, cooperative AI, economics, mechanism design, history, and hierarchical agency.
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop!
Post-AGI Civilizational Equilibria: Are there any good ones?
Vancouver, July 14th
www.post-agi.org
Featuring: Joe Carlsmith, @richardngo.bsky.social, Emmett Shear ... 🧵
Thanks for explaining, but I'm still confused. LLMs succeed regularly at following complex natural-language instructions without examples - it's their bread and butter. I agree they sometimes have problems executing algorithms consistently (unless fine-tuned to do so), but so do untrained humans.
"only those individuals who explicitly understood a task (via a natural language explanation) reached a correct solution whereas implicit trial and error reinforcement failed to converge. This ... has yet to be demonstrated in an LLM."
Is this claiming LLMs haven't been shown to benefit from hints?