As AI continues to be adopted in national security and defence contexts, the rise of gen AI agents poses questions regarding both their cyber capabilities and the novel attack vectors inherent to their use that may impede military operations. Excited to work with Boyan to assess exactly this!
02.12.2025 20:14 — 👍 1 🔁 0 💬 0 📌 0
autonomous vehicles, weapons, and the use of AI in other safety-critical systems like nuclear. To find out that some 23-year-old with a history degree "proved" on a Substack why AI is safe and nuclear regulation is "bad", and then see that adopted by government, would take a lifetime to dispel these claims.
01.12.2025 19:03 — 👍 5 🔁 0 💬 0 📌 0
We don't talk enough about how our governments are captured by a bunch of X shitposters with substacks who "prove" things by pointing to cherry-picked or disproven corporate claims while yelling "abundance" and "build more!" with not an ounce of expertise. Especially with ...
01.12.2025 19:03 — 👍 7 🔁 0 💬 1 📌 0
If you've spoken to any Western military personnel, this has been known for quite some time. Unsurprising given the track record of Oculus within the military. This is the outcome when defense contractors, especially those selling "AI", grade their own homework.
01.12.2025 16:08 — 👍 20 🔁 3 💬 0 📌 0
US navy accused of cover-up over dangerous plutonium in San Francisco
Advocates allege navy knew levels of airborne plutonium at Hunters Point shipyard were high before it alerted officials
And this is also exactly why shifting nuclear regulation and oversight from the NRC to the DOD is particularly dangerous. These are political and partial actors who do not have public safety in mind.
www.theguardian.com/us-news/2025...
29.11.2025 18:25 — 👍 6 🔁 0 💬 0 📌 0
The Death that Keeps on Going
How much can one village physically take? The worst-case scenario has already happened countless times in the small West Bank community of Umm al-Khair. It happened when prominent Palestinian…
As the world shifts its gaze away from Palestine, the series stands as documentation of the continued ethnic cleansing and offers testimonies and stories from those facing displacement, homelessness and violence from settlers.
27.11.2025 18:02 — 👍 13 🔁 6 💬 0 📌 0
It’s all happening in NYC! Taking real steps towards a politics of hope that's already diffusing far beyond this city and country: I couldn't be more excited to serve on Mayor-Elect Zohran Mamdani's wide-ranging and hugely inspiring transition team: www.cbsnews.com/newyork/news...
25.11.2025 21:30 — 👍 13 🔁 2 💬 1 📌 0
Despite warnings in our report, today's release of the UK Nuclear Regulatory Review is littered with unsubstantiated claims and recommendations touting AI as a "powerful tool" that is "cost-effective" for safety and licensing, without noting any risks or caveats. This trend has now reached the UK.
24.11.2025 13:16 — 👍 5 🔁 3 💬 1 📌 0
I've said it before and I will say it again. There is no way to secure a system when its potential attack surface is *all of language*
20.11.2025 17:23 — 👍 59 🔁 16 💬 2 📌 1
Hi Nina, both Sofia and I are actually experts in nuclear safety and work on nuclear power. I recommend reading the report, as there is no fearmongering regarding nuclear power.
14.11.2025 19:02 — 👍 2 🔁 0 💬 1 📌 0
Tech companies are betting big on nuclear energy to meet AI’s massive power demands—and Trump’s done a lot to make it easier for them. Heidy Khlaaf, the head AI scientist at the AI Now Institute, tells us why that’s dangerous.
@mjgault.bsky.social has the story:
www.404media.co/power-compan...
14.11.2025 18:57 — 👍 131 🔁 69 💬 7 📌 10
Great coverage by @mjgault.bsky.social on our report, what's at stake, and what could go wrong when using AI in an attempt to accelerate nuclear development. Read our report here: ainowinstitute.org/publications...
14.11.2025 18:52 — 👍 6 🔁 1 💬 0 📌 0
This fast-tracking approach comes alongside efforts from many of these AI companies themselves to apply unproven AI systems to speed the pace of licensing/regulation. It also forms the core of a new report from the @ainowinstitute.bsky.social @heidykhlaaf.bsky.social
14.11.2025 15:51 — 👍 2 🔁 1 💬 0 📌 0
A.I. Goes Nuclear!
OpenAI, Google, and Microsoft are betting big on nuclear energy to power their A.I. data centers. But weakened regulations may create risks, nuclear safety experts warn.
New: I wrote about the nuclear push coming from an energy-constrained AI industry, importantly coupled with an increasingly deregulatory environment from the White House that often mirrors the language coming directly from these corporations.
puck.news/ais-nuclear-...
14.11.2025 15:51 — 👍 2 🔁 2 💬 1 📌 0
Opinion | You May Already Be Bailing Out the AI Business
Washington is treating the industry as if it’s too big to fail, even as the market sends lukewarm signals.
This week OpenAI walked back a call for the government to backstop financing for its trillion-dollar investments in data centers. This was only the tip of the iceberg; a slow bailout for AI firms is already underway. Read more from @ambakak.bsky.social and me in @wsj.com: www.wsj.com/opinion/you-...
12.11.2025 22:56 — 👍 163 🔁 75 💬 4 📌 29
Thank you for your kind words! The irony is that they're using the Cold War analogy to roll back the very thresholds established in that period.
12.11.2025 12:08 — 👍 1 🔁 1 💬 1 📌 0
Despite safety and proliferation risks, both AI labs and governments continue to execute these initiatives by positioning nuclear infrastructure as an extension of AI infrastructure in service of the purported "AI arms race". A risky shortcut with catastrophic consequences.
12.11.2025 11:07 — 👍 3 🔁 0 💬 0 📌 0
Fission for Algorithms: The Undermining of Nuclear Regulation in Service of AI - AI Now Institute
A report examining nuclear “fast-tracking” initiatives on their feasibility and their impact on nuclear safety, security, and safeguards.
New Report: Fission for Algorithms. We draw on our nuclear expertise to dissect the risky fast-tracking initiatives hastening nuclear development in service of AI. This includes proposals to use Gen AI for nuclear licensing, whilst lowering well-established nuclear thresholds.
12.11.2025 11:05 — 👍 20 🔁 8 💬 2 📌 3
"Rafael purchased AI technologies made available through AWS, including the state-of-the-art large language model Claude ... The materials reviewed also indicate Amazon sold cloud-computing services to Israel’s nuclear program and offices administering the West Bank"
25.10.2025 20:24 — 👍 9 🔁 7 💬 0 📌 0
Yes, we've actually engaged with your colleagues previously! We planned on reaching out when the paper is released and would be happy to include you in the thread.
21.10.2025 15:47 — 👍 2 🔁 0 💬 0 📌 0
We have a paper on this soon and will share once it's out!
21.10.2025 14:49 — 👍 1 🔁 0 💬 1 📌 0
There is an issue with this dichotomy: it positions this "solution" as sufficient when it's far from that, and it ultimately doesn't produce anything of value. The "doing something is better than nothing" approach distracts from the risks of AI having access to nuclear secrets, for example.
21.10.2025 10:03 — 👍 1 🔁 0 💬 1 📌 0
I spoke to @mjgault.bsky.social in WIRED on what I ultimately view as safety theatre for "nuclear safeguarding", and how it distracts from the real risk of unregulated private corporations having access to incredibly sensitive nuclear secrets given their insecure AI models.
21.10.2025 09:58 — 👍 16 🔁 5 💬 0 📌 0
OpenAI, Anthropic & others have shifted from championing ethics to signing $200M+ defense contracts that embed gen AI into high-risk military systems. In @theverge.com, @heidykhlaaf.bsky.social explains why the move toward defense partnerships is a safety risk.
Listen here: shorturl.at/mTAZq
30.09.2025 18:19 — 👍 13 🔁 2 💬 1 📌 2
I'm sorry, what? We're now back to British colonial officers in British Mandate Palestine who hold “supreme political and legal authority”?
26.09.2025 08:57 — 👍 5 🔁 0 💬 1 📌 0
How AI safety took a backseat to military money
Heidy Khlaaf discusses the industry shift toward military applications and what it means for AI safety.
It was great speaking to @haydenfield.bsky.social on the Decoder podcast, where we discussed the relationship between safety and the push for generative AI to be deployed within military applications.
www.theverge.com/podcast/7848...
25.09.2025 16:38 — 👍 14 🔁 7 💬 2 📌 2
"The State of Israel has committed genocide."
Navi Pillay, chair of the UN Independent International Commission of Inquiry on the Occupied Palestinian Territory, has told Al Jazeera that Israel’s war on Gaza is a genocide.
16.09.2025 07:57 — 👍 458 🔁 230 💬 18 📌 15
Director, Global Risk at Federation of American Scientists, former Sr. Dir NSC, Bulletin of Atomic Scientists (doomsday clock setter). Pope of Chili Town. No #manels, Yankees/Gunners.
Independent news on politics and war
dropsitenews.com
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
Human rights & democracy advocate in 🇺🇸 & the 🌍. Privacy & data @ CDT. She/her. Opinions personal, etc. ❌👑
Director of IT and Digital Strategy at @whiteheadinstitute.bsky.social, MIT. PhD in CS. Turning data into discoveries. Fan of practical tech that works. Tea drinker, research enabler, innovation chaser. Views are my own. He/him.
ceo and cofounder of @germnetwork.com, the secure messenger for the social internet. san franciscan today, chicagoan 4eva 🌱
these skeets delete
Germ DM me 🔑
https://ger.mx/A53xu7cyweS-R5l3GdDawq8007L96gI4bL-KTFv0kBMz#did:plc:ad4m72ykh2evfdqen3qowxmg
Nukes, weird tech, and conflict at @404media.co Host of angryplanetpod.com
Tips: matthew@404media.co // signal: 347 762-9212
Epidemiologist/mathematician. Professor at London School of Hygiene & Tropical Medicine. Author of The Rules of Contagion and The Perfect Bet. Views own.
New book Proof: The Uncertain Science of Certainty available now: proof.kucharski.io
EIC at IBM Research. Longtime journalist.
Executive Director @ Stop Killer Robots
#TeamHuman
Live in Chicago, write for Wired, got a great attitude
Send me tips: kate_knibbs@wired.com / Signal: kateknibbs.09
Investigative journalist. New York Times Contributing Opinion Writer. Founder, Proof News, The Markup. Priors: ProPublica, WSJ. Fellow at Harvard Shorenstein Center. Signal: Julia.368
Sign up for my newsletter: https://buttondown.com/JuliaAngwin
Political correspondent @zeteo.com
Any tips or info you’d like to share?
Signal premthakker.35
Prem@zeteonews.com
https://linktr.ee/premthakker
Information Retrieval Researcher ✊🏽🍉🕊️ | Pronouns: He/him
Homepage: https://bhaskar-mitra.github.io/
I co-direct @AINowInstitute; @Signalapp board. Cover art by my mother https://shoumabanerjeekak.com/about-the-artist/
Palestinian-American Filmmaker
https://www.lexi-alexander.com
#AlJazeeraEnglish, we focus on people and events that affect people's lives. We bring topics to light that often go under-reported, listening to all sides of the story and giving a 'voice to the voiceless.'
#newstoday
Historian of revolution, pessimistic anarchist. Nobody else wants these views, trust me. 'His real face is a hat' - Fern Riddell.