
MIRI

@intelligence.org.bsky.social

For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence.

32 Followers  |  5 Following  |  12 Posts  |  Joined: 25.11.2024

Latest posts by intelligence.org on Bluesky


We've been getting some pretty awesome blurbs for Eliezer and Nate's forthcoming book: If Anyone Builds It, Everyone Dies

More details here: www.lesswrong.com/posts/khmpWJ...

One of my favorite reactions, from someone who works on AI policy in DC:

19.06.2025 20:52 · 👍 8    🔁 2    💬 1    📌 0
YouTube video by Win-Win with Liv Boeree: Superintelligent AI - Our Best or Worst Idea?

Really enjoyed chatting with @anthonyaguirre.bsky.social, @livboeree.bsky.social, and the folks who came out for the Win-Win podcast's second-ever IRL event in Austin. Great audience with lots of good and tough questions.

Thanks for putting it on!

www.youtube.com/watch?v=XWZg...

21.05.2025 21:27 · 👍 6    🔁 2    💬 1    📌 0
If Anyone Builds It, Everyone Dies: The scramble to create superhuman AI has put us on the path to extinction – but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.

📢 Announcing IF ANYONE BUILDS IT, EVERYONE DIES

A new book from MIRI co-founder Eliezer Yudkowsky and president Nate Soares, published by @littlebrown.bsky.social.

πŸ—“οΈ Out September 16, 2025

Visit the website to learn more and preorder the hardcover, ebook, or audiobook.

14.05.2025 16:59 · 👍 14    🔁 6    💬 1    📌 0

Big thanks to @pbarnett.bsky.social, @aaronscher.bsky.social, and the rest of our TechGov team at MIRI for their hard work putting this together, as well as the huge number of folks who read the earlier drafts and provided thoughtful feedback.

01.05.2025 22:28 · 👍 1    🔁 0    💬 0    📌 0
AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions – MIRI Technical Governance Team. This AI governance research agenda lays out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight on how to reduce catastrophic and e...

Given the danger down all the other paths, we recommend the world build the capacity to collectively stop dangerous AI activities. However, it’s worth preparing for other scenarios. See the agenda for hundreds of research questions we want answered! 10/10
techgov.intelligence.org/research/ai-...

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

In another scenario, described in Superintelligence Strategy, nations keep each other’s AI development in check by threatening to sabotage any destabilizing AI progress. However, visibility and sabotage capability may not be good enough, so this regime may not be stable. 9/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

Alternatively, the US government may largely leave the development of advanced AI to companies. This risks proliferating dangerous AI capabilities to malicious actors, faces similar risks to the US National Project, and overall seems extremely unstable. 8/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

Another scenario we explore is a US National Project: the US races to build superintelligence, with the goal of achieving a decisive strategic advantage globally. This risks both loss of control to AI and increased geopolitical conflict, including war. 7/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

We focus on an off switch since we believe halting frontier AI development will be crucial to prevent loss of control. We think skeptics of loss of control should value building an off switch, since it would be a valuable tool to reduce dual-use/misuse risks, among others. 6/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

Our favored scenario involves building the technical, legal, and institutional infrastructure required to internationally restrict dangerous AI development and deployment, preserving optionality for the future. We refer to this as an β€œoff switch.” 5/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

In the research agenda, we lay out four scenarios for the geopolitical response to advanced AI in the coming years. For each scenario, we identify research questions that, if answered, would provide important insight into how to reduce catastrophic and extinction risks. 4/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

The current trajectory of AI development looks pretty rough, likely resulting in catastrophe. As AI becomes more capable, we will face risks of loss of control, human misuse, geopolitical conflict, and authoritarian lock-in. 3/10

01.05.2025 22:28 · 👍 1    🔁 0    💬 1    📌 0

Most people don’t seem to understand how wild the coming few years could be. AI development, as fast as it is now, could quickly accelerate due to automation of AI R&D. Many actors, including governments, may think that if they control AI, they control the future. 2/10

01.05.2025 22:28 · 👍 2    🔁 0    💬 1    📌 0

New AI governance research agenda from MIRI’s TechGov Team. We lay out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight on how to reduce catastrophic and extinction risks from AI. 🧵 1/10
techgov.intelligence.org/research/ai-...

01.05.2025 22:28 · 👍 13    🔁 5    💬 2    📌 0

MIRI's (@intelligence.org) Technical Governance Team submitted a comment on the AI Action Plan.

Great work by David Abecassis, @pbarnett.bsky.social, and @aaronscher.bsky.social

Check it out here: techgov.intelligence.org/research/res...

18.03.2025 22:38 · 👍 5    🔁 2    💬 0    📌 0
