
Dr Heidy Khlaaf (هايدي خلاف)

@heidykhlaaf.bsky.social

Climber 🇪🇬 | Chief AI Scientist at @ainowinstitute.bsky.social | Safety engineer (nuclear, software & AI/ML) | TIME 100 AI https://www.heidyk.com/

2,741 Followers  |  199 Following  |  155 Posts  |  Joined: 22.06.2023

Latest posts by heidykhlaaf.bsky.social on Bluesky

As AI continues to be adopted in national security and defence contexts, the rise of gen AI agents poses questions regarding both their cyber capabilities, and the novel attack vectors inherent to their use that may impede military operations. Excited to work with Boyan to assess exactly this!

02.12.2025 20:14 — 👍 1    🔁 0    💬 0    📌 0

autonomous vehicles, weapons, and use of AI in other safety-critical systems like nuclear. To find out that some 23-year-old with a history degree "proved" why AI is safe and nuclear regulation is "bad" on a Substack, and then see that adopted by government. It would take a lifetime to dispel these claims.

01.12.2025 19:03 — 👍 5    🔁 0    💬 0    📌 0

We don't talk enough about how our governments are captured by a bunch of X shitposters with substacks who "prove" things by pointing to cherry-picked or disproven corporate claims while yelling "abundance" and "build more!" with not an ounce of expertise. Especially with ...

01.12.2025 19:03 — 👍 7    🔁 0    💬 1    📌 0

If you've spoken to any western military personnel, this has been known for quite some time. Unsurprising given the track record of Oculus within the military. This is the outcome when defense contractors, especially those selling "AI", grade their own homework.

01.12.2025 16:08 — 👍 20    🔁 3    💬 0    📌 0
US navy accused of cover-up over dangerous plutonium in San Francisco Advocates allege navy knew levels of airborne plutonium at Hunters Point shipyard were high before it alerted officials

And this is also exactly why the deference of nuclear regulation and oversight from the NRC to the DOD is particularly dangerous. These are political and partial actors who do not have public safety in mind.

www.theguardian.com/us-news/2025...

29.11.2025 18:25 — 👍 6    🔁 0    💬 0    📌 0
The Death that Keeps on Going How much can one village physically take? The worst-case scenario has already happened countless times in the small West Bank community of Umm al-Khair. It happened when prominent Palestinian…

As the world shifts its gaze away from Palestine, the series stands as documentation of the continued ethnic cleansing and offers testimonies and stories from those facing displacement, homelessness and violence from settlers.

27.11.2025 18:02 — 👍 13    🔁 6    💬 0    📌 0

It’s all happening in NYC! Taking real steps towards a politics of hope that's already diffusing far beyond this city and country: I couldn't be more excited to serve on Mayor-Elect Zohran Mamdani's wide-ranging and hugely inspiring transition team: www.cbsnews.com/newyork/news...

25.11.2025 21:30 — 👍 13    🔁 2    💬 1    📌 0

Despite warnings in our report, today's release of the UK Nuclear Regulatory Review is littered with unsubstantiated claims and recommendations touting AI "as a powerful tool" and "cost-effective" to be used for safety and licensing without noted risks or caveats. This trend has now reached the UK.

24.11.2025 13:16 — 👍 5    🔁 3    💬 1    📌 0

I've said it before and I will say it again. There is no way to secure a system when its potential attack surface is *all of language*

20.11.2025 17:23 — 👍 59    🔁 16    💬 2    📌 1

Hi Nina, both Sofia and I are actually experts in nuclear safety and work on nuclear power. I recommend reading the report as there is no fear mongering regarding nuclear power.

14.11.2025 19:02 — 👍 2    🔁 0    💬 1    📌 0

Tech companies are betting big on nuclear energy to meet AI’s massive power demands—and Trump’s done a lot to make it easier for them. Heidy Khlaaf, the head AI scientist at the AI Now Institute, tells us why that’s dangerous.

@mjgault.bsky.social has the story:
www.404media.co/power-compan...

14.11.2025 18:57 — 👍 131    🔁 69    💬 7    📌 10

Great coverage by @mjgault.bsky.social on our report, what's at stake, and what could go wrong in using AI in an attempt to accelerate nuclear development. Read our report here: ainowinstitute.org/publications...

14.11.2025 18:52 — 👍 6    🔁 1    💬 0    📌 0

This fast-tracking approach comes alongside efforts from many of these AI companies themselves to apply unproven AI systems to speed the pace of licensing/regulation. It also forms the core of a new report from the @ainowinstitute.bsky.social @heidykhlaaf.bsky.social

14.11.2025 15:51 — 👍 2    🔁 1    💬 0    📌 0
A.I. Goes Nuclear! OpenAI, Google, and Microsoft are betting big on nuclear energy to power their A.I. data centers. But weakened regulations may create risks, nuclear safety experts warn.

New: I wrote about the nuclear push coming from an energy-constrained AI industry, one importantly coupled with an increasingly deregulatory environment from the White House that often mirrors the language coming directly from these corporations.

puck.news/ais-nuclear-...

14.11.2025 15:51 — 👍 2    🔁 2    💬 1    📌 0
Opinion | You May Already Be Bailing Out the AI Business Washington is treating the industry as if it’s too big to fail, even as the market sends lukewarm signals.

This week OpenAI walked back a call for the govt to backstop financing for its trillion-dollar investments in data centers. This was only the tip of the iceberg; a slow bailout for AI firms is already underway. Read more from @ambakak.bsky.social and me in @wsj.com: www.wsj.com/opinion/you-...

12.11.2025 22:56 — 👍 163    🔁 75    💬 4    📌 29

Thank you for your kind words! The irony is that they're using the Cold War analogy to roll back the very thresholds established in that period.

12.11.2025 12:08 — 👍 1    🔁 1    💬 1    📌 0

Despite safety and proliferation risks, both AI labs and governments continue to execute these initiatives through the positioning of nuclear infrastructure as an extension of AI infrastructure in service of the purported “AI Arms race”. A risky shortcut with catastrophic consequences.

12.11.2025 11:07 — 👍 3    🔁 0    💬 0    📌 0
Fission for Algorithms: The Undermining of Nuclear Regulation in Service of AI - AI Now Institute A report examining nuclear “fast-tracking” initiatives on their feasibility and their impact on nuclear safety, security, and safeguards.

New Report: Fission for Algorithms. We draw on our nuclear expertise to dissect the risky fast-tracking initiatives hastening nuclear development in service of AI. This includes proposals to use Gen AI for nuclear licensing, whilst lowering well-established nuclear thresholds.

12.11.2025 11:05 — 👍 20    🔁 8    💬 2    📌 3

"Rafael purchased AI technologies made available through AWS, including the state-of-the-art large language model Claude ... The materials reviewed also indicate Amazon sold cloud-computing services to Israel’s nuclear program and offices administering the West Bank"

25.10.2025 20:24 — 👍 9    🔁 7    💬 0    📌 0
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work? Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection—or a protection at all.

Anthropic’s partnership with the DOE to keep Claude from building a nuclear weapon makes for good headlines. @heidykhlaaf.bsky.social calls it security theater. The real risk is AI firms gaining access to national security data.

www.wired.com/story/anthro...

22.10.2025 17:48 — 👍 12    🔁 5    💬 0    📌 1

Yes we've actually engaged with your colleagues prior! Planned on reaching out during paper release and would be happy to include you in the thread.

21.10.2025 15:47 — 👍 2    🔁 0    💬 0    📌 0

We have a paper on this soon and will share once it's out!

21.10.2025 14:49 — 👍 1    🔁 0    💬 1    📌 0

There is an issue with this dichotomy that places this "solution" as sufficient when it's far from that, and ultimately doesn't produce anything of value. The approach of "doing something is better than nothing" distracts from the risks present with AI having access to nuclear secrets, for example.

21.10.2025 10:03 — 👍 1    🔁 0    💬 1    📌 0

I spoke to @mjgault.bsky.social in WIRED on what I ultimately view as safety theatre for "nuclear safeguarding" and how it distracts from the real risk of unregulated private corporations having access to incredibly sensitive nuclear secrets given their insecure AI models.

21.10.2025 09:58 — 👍 16    🔁 5    💬 0    📌 0
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work? Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection—or a protection at all.

Anthropic says its AI won't help you build a nuclear weapon. Will it work? And can a chatbot even help build a nuke?

20.10.2025 14:04 — 👍 20    🔁 12    💬 3    📌 10

OpenAI, Anthropic & others have shifted from championing ethics to signing $200M+ defense contracts that embed gen AI into high-risk military systems. In @theverge.com, @heidykhlaaf.bsky.social explains why the move toward defense partnerships is a safety risk.

Listen here: shorturl.at/mTAZq

30.09.2025 18:19 — 👍 13    🔁 2    💬 1    📌 2
As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies U.S. tech giants have quietly empowered Israel to track and kill many more alleged militants more quickly in Gaza and Lebanon through a sharp spike in artificial intelligence and computing services.

Yes: apnews.com/article/isra...
And
www.theguardian.com/world/2025/j...

26.09.2025 16:30 — 👍 1    🔁 0    💬 1    📌 1

I'm sorry what? We're now back to British colonial officers in British Mandate Palestine who hold “supreme political and legal authority”?

26.09.2025 08:57 — 👍 5    🔁 0    💬 1    📌 0
How AI safety took a backseat to military money Heidy Khlaaf discusses the industry shift toward military applications and what it means for AI safety.

It was great speaking to @haydenfield.bsky.social on the Decoder podcast, where we discussed the relationship between safety and the push for generative AI to be deployed within military applications.
www.theverge.com/podcast/7848...

25.09.2025 16:38 — 👍 14    🔁 7    💬 2    📌 2

"The State of Israel has committed genocide."

Navi Pillay, chair of the UN Independent International Commission of Inquiry on the Occupied Palestinian Territory, has told Al Jazeera that Israel’s war on Gaza is a genocide.

16.09.2025 07:57 — 👍 458    🔁 230    💬 18    📌 15
