
Hayden Field

@haydenfield.bsky.social

Senior AI reporter at The Verge. 5+ years covering the AI industry's power dynamics, societal implications & the arms race at large. Previously: CNBC, Morning Brew, Protocol, etc. Contact me securely on Signal: haydenfield.11

5,663 Followers  |  412 Following  |  227 Posts  |  Joined: 26.04.2023

Posts by Hayden Field (@haydenfield.bsky.social)

Video thumbnail

WTF is happening between Anthropic and the Pentagon?

@theverge.com's @haydenfield.bsky.social joins @eggerdc.bsky.social on Bulwark Takes to talk it over.

02.03.2026 18:29 — 👍 79    🔁 25    💬 4    📌 0

Thank you so much!

02.03.2026 15:14 — 👍 1    🔁 0    💬 0    📌 0
Preview
How OpenAI caved to the Pentagon on AI surveillance
The law doesn’t say what Sam Altman claims it does.

Gift link: bit.ly/4slK8Zj

02.03.2026 14:53 — 👍 23    🔁 4    💬 0    📌 1

Across social media and the AI industry, people immediately began to challenge Altman's claim.
Why, they asked, would the Pentagon suddenly agree to red lines it had said, in no uncertain terms, it would never accept?
The answer, sources told The Verge, is that the Pentagon didn't budge. OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.
One source familiar with the Pentagon's negotiations with AI companies confirmed that OpenAI's deal is much softer than the one Anthropic was pushing for, thanks largely to three words: "any lawful use." In negotiations, the person said, the Pentagon wouldn't back down on its desire to collect and analyze bulk data on Americans. If you look line by line at the OpenAI terms, the source said, every aspect of it boils down to: if it's technically legal, then the US military can use OpenAI's technology to carry it out. And over the past decades, the US government has stretched the definition of "technically legal" to cover sweeping mass surveillance programs, and more.


Sam Altman got played and spun it like a win. @haydenfield.bsky.social has the scoop from a weekend’s worth of reporting from inside the Pentagon AI negotiations. www.theverge.com/ai-artificia...

02.03.2026 14:30 — 👍 275    🔁 101    💬 16    📌 5
Preview
How OpenAI caved to the Pentagon on AI surveillance
The law doesn’t say what Sam Altman claims it does.

NEW: On Friday night when OpenAI announced its Pentagon deal, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had clearly said it would never budge?

The answer, sources told me, is that it didn't.
www.theverge.com/ai-artificia...

02.03.2026 14:45 — 👍 314    🔁 145    💬 12    📌 14

“When I joined the tech industry, I thought tech was about making people’s lives easier,” an Amazon Web Services employee told The Verge, “but now it seems like it’s all about making it easier to surveil and deport and kill people.”

27.02.2026 17:11 — 👍 4    🔁 2    💬 1    📌 0
Preview
Trump orders federal agencies to drop Anthropic’s AI
AI vs. the Pentagon

Anthropic has refused to agree to the Pentagon’s demand to allow ‘any lawful use’ of its AI.

27.02.2026 21:35 — 👍 422    🔁 98    💬 33    📌 13
Preview
We don’t have to have unsupervised killer robots
It’s the day of the Pentagon’s looming ultimatum for Anthropic. The situation has left employees at some companies with defense contracts feeling betrayed.

NEW: Amid the Anthropic-Pentagon situation, all week, I've been speaking to employees at OpenAI, Amazon, Microsoft, Google, and more companies, who have expressed similar feelings about the changing moral landscapes internally.

Here's what's going on. (Gift link) www.theverge.com/ai-artificia...

27.02.2026 16:25 — 👍 62    🔁 27    💬 0    📌 5
Preview
Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
The Pentagon’s “threats do not change our position,” Anthropic CEO Dario Amodei wrote.

Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance

26.02.2026 23:30 — 👍 162    🔁 26    💬 9    📌 5

an interesting journey

25.02.2026 17:53 — 👍 21    🔁 6    💬 4    📌 0
“Alive” is obviously a loaded term; the more frequently used word is “conscious.” If you ask Anthropic if the company thinks Claude is alive, the company will flatly deny it, but stop short of saying the models aren’t conscious.

Kyle Fish, who leads model welfare research at Anthropic, told The Verge, “No, we don’t think Claude is ‘alive’ like humans or any other biological organisms. Asking whether they’re ‘alive’ is not a helpful framing for understanding them, as it typically refers to a fuzzy set of physiological, reproductive, and evolutionary characteristics.” Instead, he believes that “Claude, and other AI models, are a new kind of entity altogether.”

And is that new entity conscious? “Questions about potential internal experience, consciousness, moral status, and welfare are serious ones that we’re investigating as models become more sophisticated and capable, but we remain deeply uncertain about these topics,” he said.


Sent @haydenfield.bsky.social to straight-up ask if Anthropic thinks Claude is alive. The answer is essentially a company inviting you to have a metaphysical crisis about whether “alive” requires flesh and blood www.theverge.com/report/88376...

25.02.2026 14:49 — 👍 89    🔁 11    💬 5    📌 9
Preview
Does Anthropic think Claude is alive? Define ‘alive’
“We don’t know if the models are conscious,” Anthropic CEO Dario Amodei said on a podcast earlier this month.

Anthropic is "reinforcing ideas that have caused real harm, including some deaths by suicide among people who believe that the chatbot they’re speaking with exhibits some form of consciousness or deep empathy." - @haydenfield.bsky.social

25.02.2026 14:43 — 👍 33    🔁 7    💬 1    📌 1

OpenAI has a new chief people officer. Arvind KC, who was formerly Roblox’s chief people & systems officer, has also held senior roles at Google, Palantir & Meta, per OpenAI.

He replaces the company’s former chief people officer, who departed in Aug '25 after <6mo in the role.

24.02.2026 21:41 — 👍 4    🔁 0    💬 0    📌 0

Update: From today's mtg between Hegseth and Amodei, Axios reported an ultimatum with a Friday evening deadline: the Pentagon "will either cut ties and declare Anthropic a 'supply chain risk,' or invoke the Defense Production Act to force the company to tailor its model to the military's needs."

24.02.2026 19:37 — 👍 11    🔁 3    💬 0    📌 2
Preview
Inside Anthropic’s existential negotiations with the Pentagon
Remember Emil Michael?

By me + @tinanguyen.bsky.social. Gift link: www.theverge.com/ai-artificia...

24.02.2026 14:13 — 👍 20    🔁 10    💬 2    📌 2
Post image

Today, Anthropic CEO Dario Amodei meets with Pete Hegseth at the White House, amid intensifying negotiations over terms that would give the military carte blanche to use Anthropic's AI for lethal autonomous weapons with no human involvement. Here's a look inside those talks.

24.02.2026 14:12 — 👍 86    🔁 42    💬 13    📌 26

URL shortener did me dirty and it says the valid gift link is too long to post lol. DM me if you want it!

20.02.2026 01:34 — 👍 3    🔁 0    💬 0    📌 0
Preview
The nine people trying to stop AI from ruining the world
Spoiler: the nine-person team works for Anthropic.

Ugh, URL shortener did me dirty. Here's a valid gift link: www.theverge.com/ai-artificia...

20.02.2026 01:31 — 👍 4    🔁 2    💬 1    📌 0
Preview
The nine people trying to stop AI from ruining the world
Spoiler: the nine-person team works for Anthropic.

Today, Anthropic's Jack Clark announced that the company plans to majorly scale up its Societal Impacts team, making it a "load-bearing team for informing decisions Anthropic makes."

I profiled that nine-person team in December. Read it here (gift link):
url-shortener.me/DJT8

19.02.2026 23:29 — 👍 48    🔁 11    💬 3    📌 1
Preview
What’s behind the mass exodus at xAI?
Departures during a restructuring are normal — but internal tensions might play a role as well.

Former xAI employees told us that this week's restructuring followed tensions over safety and being "stuck in the catch-up phase."
www.theverge.com/ai-artificia...

13.02.2026 18:40 — 👍 36    🔁 10    💬 5    📌 2
Verge headline: ‘Shut up and focus on the mission’: Tech workers are frustrated by their companies’ silence about ICE
by Hayden Field

Photo illustration depicts a person in a window on a PC, typing at a laptop. Looming in the background are ICE law enforcement agents


“The dissent I’ve seen is like a whisper. It’s a fear-based culture right now.”

Across the industry, workers describe a ‘fear-based culture’ and pressure to ‘fall in line.’

Read more from @haydenfield.bsky.social: buff.ly/NArp66x

11.02.2026 17:36 — 👍 2170    🔁 727    💬 114    📌 34

Had a blast returning to Vox Media's Today Explained for today's show, talking about AI scheming & the OpenClaw/Moltbook of it all. Check it out wherever you get your podcasts! www.vox.com/today-explai...

11.02.2026 16:25 — 👍 7    🔁 0    💬 0    📌 0

Would love to hear more about this / what the companies were? I'm on Signal at haydenfield.11

11.02.2026 16:24 — 👍 0    🔁 0    💬 0    📌 0
Preview
‘Shut up and focus on the mission’: Tech workers are frustrated by their companies’ silence about ICE
Across the industry, workers describe a “fear-based culture” and pressure to “fall in line.”

gift link here: tinyurl.com/3w4cr64n

11.02.2026 14:12 — 👍 5    🔁 3    💬 0    📌 1
Verge headline: Google's healthcare AI made up a body part -- what happens when doctors don't notice?
by Hayden Field

Illustration depicts a human brain wearing a stethoscope


From August 2025: Imagine a radiologist is using a cutting-edge AI tool to analyze your brain scan. The scan flags a problem in your "basilar ganglia." The problem is, there's no such thing.

Read more from @haydenfield.bsky.social: www.theverge.com/health/71804...

09.02.2026 21:31 — 👍 404    🔁 160    💬 31    📌 21
Preview
‘Shut up and focus on the mission’: Tech workers are frustrated by their companies’ silence about ICE
Across the industry, workers describe a “fear-based culture” and pressure to “fall in line.”

Amid an immigration crackdown, tech workers across the industry describe a culture of silence & fear—and trepidation over the type of future they’re helping build. Many also described an eerie lack of acknowledgment in meetings & quiet internal resistance. www.theverge.com/ai-artificia...

11.02.2026 14:01 — 👍 54    🔁 19    💬 3    📌 4
Preview
ICE to begin detaining immigrants inside Social Circle warehouse in April
Homeland Security plans to build warehouse detention facilities in other cities are being met with opposition.

ICE has now spent over half a BILLION dollars just on purchasing warehouses around the country to convert into detention camps.

If these mega-camps are utilized to the full capacity ICE intends, they'll be the largest prisons in the country, with little real oversight. www.ajc.com/politics/202...

09.02.2026 17:56 — 👍 12866    🔁 7482    💬 1288    📌 1241

This post appeared under this Techmeme headline:

06.02.2026 01:57 — 👍 0    🔁 1    💬 0    📌 0
Preview
Claude has been having a moment — can it keep it up?
“Now you’re just like, ‘Here’s the magic castle. Build it.’ And it gets done.”

Boris Cherny often gets recognized in public. At the bar, at the airport, in just about any public space, people want to take selfies with Claude Code's creator.

Since the holidays, Claude has gone viral across sectors. Can Anthropic prolong the hype with Opus 4.6? www.theverge.com/report/87430...

05.02.2026 18:31 — 👍 24    🔁 3    💬 0    📌 1
Preview
Humans are infiltrating the Reddit for AI bots
Things on the AI agent social media platform got even weirder over the weekend.

Ordinary social networks face a constant onslaught of chatbots pretending to be human. A new social platform for AI agents, called Moltbook and designed to look a lot like Reddit, may face the opposite problem: getting clogged up by humans pretending to post as bots.
www.theverge.com/ai-artificia...

03.02.2026 14:18 — 👍 30    🔁 11    💬 1    📌 3