
Forethought

@forethought-org.bsky.social

Research nonprofit exploring how to navigate explosive AI progress. forethought.org

48 Followers  |  2 Following  |  21 Posts  |  Joined: 07.03.2025

Latest posts by forethought-org.bsky.social on Bluesky

Politics and Power Post-Automation (with David Duvenaud). David Duvenaud is an associate professor at the University of Toronto. He recently organised the workshop on ‘Post-AGI Civilizational Equilibria’ and is a co-author of ‘Gradual Disempowerment’. He recently finished an extended sabbatical on the Alignm…

What might happen to society and politics after widespread automation? What are the best ideas for good post-AGI futures, if any?

David Duvenaud joins the podcast —

pnc.st/s/forecast/...

25.09.2025 11:51 — 👍 1    🔁 0    💬 0    📌 0
Is Gradual Disempowerment Inevitable? (with Raymond Douglas) Raymond Douglas is a researcher focused on the societal effects of AI. In this episode, we discuss Gradual Disempowerment. To see all our published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

How could humans lose control over the future, even if AIs don't coordinate to seek power? What can we do about that?

Raymond Douglas joins the podcast to discuss “Gradual Disempowerment”

Listen: pnc.st/s/forecast/...

09.09.2025 11:25 — 👍 2    🔁 0    💬 0    📌 0
Should AI Agents Obey Human Laws? (with Cullen O'Keefe). Cullen O'Keefe is Director of Research at the Institute for Law & AI. In this episode, we discuss ‘Law-Following AI: designing AI agents to obey human laws’. To see all our published research, visit forethought.org/research. To subscribe to our ne…

Should AI agents obey human laws?

Cullen O’Keefe (Institute for Law & AI) joins the podcast to discuss “law-following AI”.

Listen: pnc.st/s/forecast/...

28.08.2025 10:30 — 👍 0    🔁 0    💬 0    📌 0
The Basic Case for Better Futures: SF Model Analysis. Forethought's SF model shows that flourishing work has greater scale than survival work: a 100x difference in impact.

Read it here:

www.forethought.org/research/su...

28.08.2025 08:31 — 👍 0    🔁 0    💬 0    📌 0

The ‘Better Futures’ series compares the value of working on ‘survival’ and ‘flourishing’.

In ‘The Basic Case for Better Futures’, Will MacAskill and Philip Trammell describe a more formal way to model the future in those terms.

28.08.2025 08:31 — 👍 1    🔁 0    💬 1    📌 0
Subscribe. We are a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.

You can find this and future article narrations wherever you listen to podcasts: www.forethought.org/subscribe#p...

26.08.2025 10:58 — 👍 0    🔁 0    💬 0    📌 0

We're starting to post narrations of Forethought articles on our podcast feed, for people who’d prefer to listen to them.

First up is ‘AI-Enabled Coups: How a Small Group Could Use AI to Seize Power’.

26.08.2025 10:58 — 👍 0    🔁 0    💬 1    📌 0
How to Make the Future Better: Concrete Actions for Flourishing. Forethought outlines concrete actions for better futures: preventing post-AGI autocracy and improving AI governance.

In the fifth essay in the ‘Better Futures’ series, Will MacAskill asks what, concretely, we could do to improve the value of the future (conditional on survival).

Read it here: www.forethought.org/research/ho...

26.08.2025 09:04 — 👍 1    🔁 0    💬 0    📌 0
Survival is Insufficient: on Better Futures, with Will MacAskill | ForeCast
Will MacAskill discusses his new research series ‘Better Futures’. ➡️ Read the papers at forethought.org/research/better-futures.

Full episode:

www.youtube.com/watch?v=UMF...

24.08.2025 13:11 — 👍 0    🔁 0    💬 0    📌 0

One reason to think the coming century could be pivotal is that humanity might soon race through a large fraction of what remains unexplored in the eventual tech tree.

From the podcast on ‘Better Futures’ —

24.08.2025 13:11 — 👍 0    🔁 0    💬 1    📌 0
Persistent Path-Dependence: Why Our Actions Matter Long-Term. Forethought argues against the “wash out” objection: AGI-enforced institutions enable persistent impact.

The fourth entry in the ‘Better Futures’ series asks whether the effects of our actions today inevitably ‘wash out’ over long time horizons, aside from extinction. Will MacAskill argues against that view.

Read it here: www.forethought.org/research/pe...

22.08.2025 09:01 — 👍 1    🔁 0    💬 0    📌 0
Survival is Insufficient: on Better Futures, with Will MacAskill | ForeCast
Will MacAskill discusses his new research series ‘Better Futures’. ➡️ Read the papers at forethought.org/research/better-futures.

Full episode:

www.youtube.com/watch?v=UMF...

21.08.2025 13:20 — 👍 0    🔁 0    💬 0    📌 0

What is the difference between “survival” and “flourishing”?

Will MacAskill on the better futures model, from our first video podcast:

21.08.2025 13:20 — 👍 0    🔁 0    💬 1    📌 0
AI Rights for Human Safety (with Peter Salib and Simon Goldstein). Peter Salib is an assistant professor of law at the University of Houston, and Simon Goldstein is an associate professor of philosophy at the University of Hong Kong. We discuss their paper ‘AI Rights for Human Safety’. To see all our published research…

New podcast episode with Peter Salib and Simon Goldstein on their article ‘AI Rights for Human Safety’.

pnc.st/s/forecast/...

09.07.2025 19:18 — 👍 1    🔁 0    💬 0    📌 0
Inference Scaling, AI Agents, and Moratoria (with Toby Ord). Toby Ord is a Senior Researcher at Oxford University. We discuss the ‘scaling paradox’, inference scaling and its implications, ways to interpret trends in the length of tasks AI agents can complete, and some unpublished thoughts on lessons from scientifi…

New podcast episode with @tobyord.bsky.social — on inference scaling, time horizons for AI agents, lessons from scientific moratoria, and more.

pnc.st/s/forecast/...

16.06.2025 10:36 — 👍 4    🔁 1    💬 0    📌 0

New report: “Will AI R&D Automation Cause a Software Intelligence Explosion?” 

As AI R&D is automated, AI progress may dramatically accelerate. Skeptics counter that hardware stock can only grow so fast. But what if software advances alone can sustain acceleration?

x.com/daniel_2718...

26.03.2025 18:27 — 👍 0    🔁 0    💬 0    📌 0

Today we’re putting out our first paper, which gives an overview of these challenges. Read it here: www.forethought.org/research/pr...

11.03.2025 15:36 — 👍 1    🔁 0    💬 0    📌 1

At Forethought, we’re doing research to help us understand the opportunities and challenges that AI-driven technological change will bring, and to help us figure out what we can do, now, to prepare.

11.03.2025 15:36 — 👍 2    🔁 0    💬 1    📌 0

And this might happen blisteringly fast – our analysis suggests we’re likely to see a century’s worth of technological progress in less than a decade. Our current institutions were not designed for such rapid change. We need to prepare in advance.

11.03.2025 15:35 — 👍 1    🔁 0    💬 1    📌 0

AI might help us to create many new technologies, and with them new opportunities – from economic abundance to enhanced collective decision-making – and new challenges – from extreme concentration of power to new weapons of mass destruction.

11.03.2025 15:35 — 👍 1    🔁 0    💬 1    📌 0

Two years ago, AI systems were close to random guessing at PhD-level science questions. Now they beat human experts. As they continue to become smarter and more agentic, they may begin to significantly accelerate technological development. What happens next?

11.03.2025 15:35 — 👍 4    🔁 0    💬 1    📌 0
