Both things can be true:
The Pentagon’s blacklisting of Anthropic is an abuse of executive power.
The daylight between the Pentagon and Anthropic is less than you think.
www.nytimes.com/2026/03/01/t...
All this talk of advanced AI targeting, and the U.S. still fired a Tomahawk at an elementary school whose outdoor playground has been visible on Google Maps since 2017. www.washingtonpost.com/national-sec...
20/ This begins with restrictions on autonomous weapons, privacy protections, re-investment in the Pentagon’s testing and oversight capacities, and checks on industry’s influence.
None of this will be easy, but our report suggests concrete steps lawmakers should take towards meaningful regulation.
19/ We can’t just trust that the military will eventually course-correct or that the next administration will do the right thing.
Congress has a constitutional duty to regulate the military and establish durable rules and oversight.
18/ The military is also buying up data containing detailed location, financial and web browsing records of Americans, undermining their Fourth Amendment rights. Congress has not meaningfully restricted these purchases, let alone the use of AI to extract even more sensitive insights from this data.
17/ Congress is also absent.
It has funded the military’s AI boom, yet seems barely alert to its harms.
Deploying AI weapons has life-and-death consequences and should be subject to democratic control.
But aside from requiring some transparency, lawmakers have not enacted meaningful red lines.
16/ This rollback of internal safeguards saps the military of the capacity and know-how to push back against flawed design choices by tech firms.
The Army, for example, has struggled with “black box” systems that introduce security vulnerabilities it cannot fully assess or control.
15/ As the military races to adopt AI, it has not meaningfully grappled with these risks.
Instead, it is slashing regulatory capacity and expertise - gutting its main office overseeing weapons testing, for example, and shuttering civilian protection efforts.
14/ @ainowinstitute.bsky.social has found that a handful of tech firms control access to the building blocks of AI.
That gives them enormous leverage to shape the military’s reliance on the technology and charge higher prices. It also creates single points of failure.
13/ While a lot of focus has been on how the military *uses* AI, it’s equally important to scrutinize the infrastructure that keeps it going.
One of the Pentagon’s biggest tech expenses is its $9B contract with Google, Oracle, AWS and MSFT for cloud computing, which keeps the military’s AI online.
12/ Barely a year later, Claude is playing a central role in the Iran strikes.
The model is so deeply embedded in the military’s systems that DOD sources say it will be very difficult to unwind.
11/ Frontier AI companies are just beginning to compete for defense contracts, but they are already having enormous impact.
In July 2025, DOD signed agreements with Anthropic, OpenAI, Google, and xAI to develop military applications of their models.
10/ But the Pentagon has been facing pressure to keep up with rapid advances in small drone warfare, which have altered the course of the Ukraine war.
Anduril has become a leading supplier of AI-powered drones and counter-drone technology.
www.wsj.com/politics/nat...
9/ The reality is that even semi-autonomous weapons are fraught with risk.
Even with human input, they can still mistake civilians for targets, fire on friendly forces, and desensitize operators to the human costs of strikes.
8/ AI is also accelerating the development of autonomous weapons. Anthropic's dispute with DOD has focused on weapons that ID and fire on targets without human input.
These weapons can be so unpredictable and indiscriminate that @icrc.org has urged states against using them to target human beings.
7/ In military situations, these mistakes can be deadly. Human oversight breaks down more frequently in the heat of war.
The clearest example of this is Gaza: IDF analysts, under immense pressure to approve AI-recommended targets for strikes, failed to sufficiently corroborate them.
6/ MSS also integrates Anthropic’s AI model, Claude, which the military is using to identify and locate targets, summarize intelligence, and plan battlefield strategy.
But AI models can “hallucinate” - generating false or misleading analysis while making it sound convincing.
5/ MSS stems from a decade of collaboration between DOD and tech firms to deploy AI in intelligence analysis, surveillance, and targeting.
But some of its algorithms aren't very accurate: in 2024, they could identify a tank only ~60% of the time in good weather and 30% in bad. (Soldier ID was 84%.)
4/ Palantir is lead contractor on the Maven Smart System (MSS), which the US is using to identify and locate targets in Iran.
Palantir’s defense revenue is growing faster than that of most comparable contractors.
This comes not just from MSS but also from its role in building the military's data backbone.
3/ While the media focus has been on Anthropic, much of AI-related defense spending has gone to Palantir, the data analytics giant, and drone manufacturer Anduril.
Both companies - each of which approached $1B in defense revenue in 2025 - have seen record growth.
2/ Around $75 billion has been allocated to the AI-driven programs we reviewed, but the true amount could be far larger. As the Pentagon pushes an “AI-first” approach, this number will almost certainly grow.
1/ As the US continues AI-enabled strikes on Iran, new @brennancenter.org research from @emileayoub.bsky.social and me examines how the military’s investments in the technology have been building to this moment.
Congress must step up and reckon with AI’s dangers to life, liberty, and democracy 🧵:
Emile Ayoub and Amos Toh: Senior Counsels, Liberty and National Security Program, Brennan Center for Justice ( @emileayoub.bsky.social and @amostoh.bsky.social):
In this Tech Policy piece, I criticize how framings of Anthropic’s & OpenAI’s negotiations with the US’s DoW overindex on myopic interpretations of human oversight, papering over what should be the real target of our scrutiny: the fact that generative AI is a flawed and inaccurate technology.
The expanding war in Iran brought to the fore questions about the role of technology in armed conflict, including the controversial use of new artificial intelligence technologies. Tech Policy Press invited perspectives from experts on what they are watching for as the situation unfolds.
As @amostoh.bsky.social and I explain, the Pentagon's dispute with Anthropic should not distract from the broader crisis at hand: Congress's failure to regulate some of the riskiest uses of AI - namely, amplifying surveillance and automating the use of lethal force.
Important 🧵on how the military’s use of Claude and other agentic AI could blow open the data broker loophole.
@emileayoub.bsky.social, @jlkoepke.bsky.social and @jakelaperruque.bsky.social are all experts on this - follow them!
Trump's "massive and ongoing operation" against Iran is a clear violation of the Constitution's separation of powers.
In our democracy, the president does not have the kingly power to plunge the nation into war without democratic debate and sanction. /1 www.brennancenter.org/our-work/ana...
this is really good background and context
new details about the Pentagon's fight with Anthropic are coming in fast and still under dispute, but this excellent analysis from @rightsduff.bsky.social and @amostoh.bsky.social from Friday is very much worth your time. bsky.app/profile/just...