Computer security experts, national security professionals, and Grimes come together to say: "If Anyone Builds It, Everyone Dies" is WEIRDLY GOOD.
I'm dying to hear people's reactions to the book; please spam me with your thoughts as you read it.
17.09.2025 22:50
Serious slate of endorsements here!
21.06.2025 21:51
Senior White House officials, a retired three-star general, a Nobel laureate, and others come out to say that you should probably read Eliezer Yudkowsky and Nate Soares' "If Anyone Builds It, Everyone Dies".
21.06.2025 17:40
We've been getting some pretty awesome blurbs for Eliezer and Nate's forthcoming book: If Anyone Builds It, Everyone Dies
More details here: www.lesswrong.com/posts/khmpWJ...
One of my favorite reactions, from someone who works on AI policy in DC:
19.06.2025 20:52
If Anyone Builds It, Everyone Dies
The scramble to create superhuman AI has put us on the path to extinction, but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.
📢 Announcing IF ANYONE BUILDS IT, EVERYONE DIES
A new book from MIRI co-founder Eliezer Yudkowsky and president Nate Soares, published by @littlebrown.bsky.social.
🗓️ Out September 16, 2025
Visit the website to learn more and preorder the hardcover, ebook, or audiobook.
14.05.2025 16:59
New AI governance research agenda from MIRI's TechGov Team. We lay out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight into how to reduce catastrophic and extinction risks from AI. 🧵 1/10
techgov.intelligence.org/research/ai-...
01.05.2025 22:28
Writes too many words & talks too much. Storyteller, writer, VGM flutist & singer, newbie beader, Twitch affiliate, Texan-Canadian. Leader of the Returners. Marketer by trade, artist at heart. 🏳️‍🌈
Twitch.tv/LaurentheFlute
For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence.
CEO at Machine Intelligence Research Institute (MIRI, @intelligence.org)
AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social (opinions my own). Author of Rising Tide on Substack: helentoner.substack.com
http://maxharms.com
Author of Crystal Society.
Research scientist in AI alignment at Google DeepMind. Co-founder of Future of Life Institute. Views are my own and do not represent GDM or FLI.
friendship · love · queer · polyamory · kink · hypnotherapist · relationship coach · parts work · neurodiversity · trauma healing
She/her
Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superhuman AI until it's safe.
https://pauseai.info
You have 80,000 hours in your career.
This makes it your best opportunity to have a positive impact on the world.
https://80000hours.org
Researching Artificial General Intelligence Safety, via thinking about neuroscience and algorithms, at Astera Institute. https://sjbyrnes.com/agi.html
I run The Update newsletter: www.update.news
Book: academic.oup.com/book/56384
FKA Ophelia
Surfacing now and then.
On Mid-Atlantic crip time
🏳️‍🌈🔥
Claude says I process my emotions externally.
Senior Data Scientist, Blue Rose Research. Futurist, neophile, machine explorer. alyssamvance@gmail.com