@wdmacaskill.bsky.social

Here's an overview. These areas include:
- Preventing post-AGI autocracy
- Improving the governance of projects to build superintelligence
- Deep space governance
- Working on AI value-alignment; figuring out what character AI should have
- Developing a regime of AI rights
- Improving AI for reasoning, coordination and decision-making.

First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else; and the military (and the whole economy) can in principle be aligned to a single person.

To reduce this risk, we can try to introduce constraints on coup-assisting uses of AI, diversify military AI suppliers, slow autocracies via export controls, and promote credible benefit-sharing.

Second, governance of ASI projects. If there's a successful national project to build superintelligence, it will wield world-shaping power. We therefore need governance structures - ideally multilateral, or at least widely distributed - that can be trusted to reflect global interests, embed checks and balances, and resist drift toward monopoly or dictatorship.

Rose Hadshar and I give a potential model: Intelsat, a successful US-led multilateral project to build the world's first global communications satellite network.

What's more, for any new major institutions like this, I think we should make their governance explicitly temporary, with reauthorization clauses stating that the law or institution must be reauthorized after some period of time. Intelsat gives an illustration: it was created under "interim agreements"; after five years, negotiations began for "definitive agreements", which came into force four years after that. The fact that the initial agreements were only temporary helped get non-US countries on board.

Third, deep space governance. This is crucial for two reasons: (i) the acquisition of resources within our solar system is a way in which one country or company could get more power than the rest of the world combined; and (ii) almost all the resources that can ever be used are outside of our solar system, so decisions about who owns these resources are decisions about almost everything that will ever happen.

Here, we could try to prevent lock-in by pushing for an international understanding of the Outer Space Treaty such that de facto grabs of space resources ("seizers keepers") are clearly illegal. Or, assuming the current "commons" regime breaks down given how valuable space resources will become, we could try to figure out in advance what a good alternative regime for allocating space resources might look like.

Fourth, working on AI value-alignment. Though corrigibility and control are important for reducing takeover risk, we also want to focus on ensuring that the AI we create positively influences society in the worlds where it doesn't take over. That is, we need to figure out the "model spec" for superintelligence - what character it should have - and how to ensure it has that character.

I think we want AI advisors that aren't sycophants, and aren't merely trying to fulfill their users' narrow self-interest - at least in the highest-stakes situations, like AI for political advice. Instead, we should at least want them to nudge us to act in accordance with the better angels of our nature.

(And, though it might be more difficult to achieve, we can also try to ensure that, even if superintelligent AI does take over, it (i) treats humans well, and (ii) creates a more flourishing AI civilisation than it would have done otherwise.)

Fifth, AI rights. Even just for the mundane reason that it will be economically useful to give AIs rights to make contracts (etc.), as we do with corporations, I think it's likely we'll soon start giving AIs at least some rights.

But what rights are appropriate? An AI rights regime will affect many things: the risk of AI takeover; the extent to which AI decision-making guides society; and the wellbeing of AIs themselves, if and when they become conscious.

In the future, it's very likely that almost all beings will be digital. The first legal decisions we make here could set precedent for how they're treated. But there are huge unresolved questions about what a good society involving both human beings and superintelligent AIs would look like. We're currently stumbling blind into one of the most momentous decisions that will ever be made.

Finally, deliberative AI. AI has the potential to be enormously beneficial for our ability to think clearly and make good decisions, both individually and collectively. (And, yes, it has the potential to be enormously destructive here, too.) We could try to build and widely deploy AI tools for fact-checking, forecasting, policy advice, macrostrategy research and coordination; this could help ensure that the most crucial decisions are made as wisely as possible.

I'm aware that there are a lot of different ideas here, and that these are just potential ideas - more proof of concept than fully fleshed-out proposals. But my hope is that work on these areas - taking them from inchoate to tractable - could help equip decision-makers with the clarity and incentives needed to build a flourishing, rather than a merely surviving, future.

The case study of Intelsat is here:
www.forethought.org/research/in...

The risk from AI-enabled coups in particular is detailed at length in:
www.forethought.org/research/ai...

Video podcast here:
www.youtube.com/watch?v=UMF...

Essay here:
www.forethought.org/research/ho...