
@wdmacaskill.bsky.social

625 Followers  |  227 Following  |  187 Posts  |  Joined: 14.11.2024

Latest posts by wdmacaskill.bsky.social on Bluesky

Intelsat as a Model for International AGI Governance
If there is an international project to build artificial general intelligence (“AGI”), how should it be designed? Existing scholarship has looked to historical models for inspiration, often suggesting the Manhattan Project or CERN as the closest analogues. But AGI is a fundamentally general-purpose technology, and is likely to be used primarily for commercial purposes rather than military or scientific ones. This report presents an under-discussed alternative: Intelsat, an international organization founded to establish and own the global satellite communications system. We show that Intelsat is a proof of concept that a multilateral project to build a commercially and strategically important technology is possible and can achieve its intended objectives—providing major benefits to both the US and its allies compared to the US acting alone. We conclude that ‘Intelsat for AGI’ is a valuable complement to existing models of AGI governance.

The case study of Intelsat is here:
www.forethought.org/research/in...

15.08.2025 15:46 — 👍 0    🔁 0    💬 0    📌 0
AI-Enabled Coups: How a Small Group Could Use AI to Seize Power
The development of AI that is more broadly capable than humans will create a new and serious threat: *AI-enabled coups*. An AI-enabled coup could be staged by a very small group, or just a single person, and could occur even in established democracies. Sufficiently advanced AI will introduce three novel dynamics that significantly increase coup risk. Firstly, military and government leaders could fully replace human personnel with AI systems that are *singularly loyal* to them, eliminating the need to gain human supporters for a coup. Secondly, leaders of AI projects could deliberately build AI systems that are *secretly loyal* to them, for example fully autonomous military robots that pass security tests but later execute a coup when deployed in military settings. Thirdly, senior officials within AI projects or the government could gain *exclusive access* to superhuman capabilities in weapons development, strategic planning, persuasion, and cyber offense, and use these to increase the…

The risk from AI-enabled coups in particular is detailed at length in:
www.forethought.org/research/ai...

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0
Survival is Insufficient: on Better Futures, with Will MacAskill | ForeCast
Will MacAskill discusses his new research series ‘Better Futures’. ➡️ Read the papers at forethought.org/research/better-futures.

Video podcast here:
www.youtube.com/watch?v=UMF...

15.08.2025 15:46 — 👍 1    🔁 1    💬 1    📌 0
How to Make the Future Better: Concrete Actions for Flourishing
Forethought outlines concrete actions for better futures: prevent post-AGI autocracy, improve AI governance.

Essay here:
www.forethought.org/research/ho...

15.08.2025 15:46 — 👍 1    🔁 1    💬 1    📌 0

But my hope is that work on these areas - taking them from inchoate to tractable - could help equip decision-makers with the clarity and incentives needed to build a flourishing, rather than a merely surviving, future.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

I’m aware that there are a lot of different ideas here, and that these are just potential ideas - more proofs of concept than fully fleshed-out proposals.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

We could try to build and widely deploy AI tools for fact-checking, forecasting, policy advice, macrostrategy research and coordination; this could help ensure that the most crucial decisions are made as wisely as possible.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Finally, deliberative AI. AI has the potential to be enormously beneficial for our ability to think clearly and make good decisions, both individually and collectively. (And, yes, it has the potential to be enormously destructive here, too.)

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

We’re currently stumbling blind into one of the most momentous decisions that will ever be made.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

In the future, it’s very likely that almost all beings will be digital. The first legal decisions we make here could set precedent for how they’re treated. But there are huge unresolved questions about what a good society involving both human beings and superintelligent AIs would look like.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

But what rights are appropriate? An AI rights regime will affect many things: the risk of AI takeover; the extent to which AI decision-making guides society; and the wellbeing of AIs themselves, if and when they become conscious.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Fifth, AI rights. Even just for the mundane reason that it will be economically useful to give AIs rights to make contracts (etc.), as we do with corporations, I think it’s likely we’ll soon start giving AIs at least some rights.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

(And, though it might be more difficult to achieve, we can also try to ensure that, even if superintelligent AI does take over, it (i) treats humans well, and (ii) creates a more flourishing AI civilisation than it would otherwise have done.)

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Instead, we should at least want them to nudge us to act in accordance with the better angels of our nature.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

I think we want AI advisors that aren’t sycophants, and aren’t merely trying to fulfill their users’ narrow self-interest - at least in the highest-stakes situations, like AI for political advice.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

That is, we need to figure out the “model spec” for superintelligence - what character it should have - and how to ensure it has that character.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Fourth, working on AI value-alignment. Though corrigibility and control are important to reduce takeover risk, we also want to focus on ensuring that the AI we create positively influences society in the worlds where it doesn’t take over.

15.08.2025 15:46 — 👍 3    🔁 0    💬 1    📌 0

Or, assuming the current “commons” regime breaks down given how valuable space resources will become, we could try to figure out in advance what a good alternative regime for allocating space resources might look like.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Here, we could try to prevent lock-in by pushing for an international understanding of the Outer Space Treaty under which de facto grabs of space resources (“seizers keepers”) are clearly illegal.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

(ii) almost all the resources that can ever be used are outside of our solar system, so decisions about who owns these resources are decisions about almost everything that will ever happen.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Third, deep space governance.

This is crucial for two reasons: (i) the acquisition of resources within our solar system is a way in which one country or company could get more power than the rest of the world combined, and

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Intelsat gives an illustration: it was created under “interim agreements”; after five years, negotiations began for “definitive agreements”, which came into force four years after that. The fact that the initial agreements were only temporary helped get non-US countries on board.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

What’s more, for any new major institutions like this, I think we should make their governance explicitly temporary, with reauthorization clauses stating that the law or institution must be reauthorized after some period of time.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Rose Hadshar and I give a potential model: Intelsat, a successful US-led multilateral project to build the world’s first global communications satellite network.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

We therefore need governance structures—ideally multilateral or at least widely distributed—that can be trusted to reflect global interests, embed checks and balances, and resist drift toward monopoly or dictatorship.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

Second, governance of ASI projects. If there’s a successful national project to build superintelligence, it will wield world-shaping power.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

and the military (and whole economy) can in principle be aligned to a single person.

To reduce this risk, we can try to introduce constraints on coup-assisting uses of AI, diversify military AI suppliers, slow autocracies via export controls, and promote credible benefit-sharing.

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else;

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

- Working on AI value-alignment; figuring out what character AI should have
- Developing a regime of AI rights
- Improving AI for reasoning, coordination and decision-making.

Here’s an overview.

15.08.2025 15:46 — 👍 3    🔁 0    💬 1    📌 0

These areas include:
- Preventing post-AGI autocracy
- Improving the governance of projects to build superintelligence
- Deep space governance

15.08.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0
