I wrote a piece about what I learned from my editors. And how those lessons are indispensable in the AI economy.
“He told another colleague, who refused to help him upload the data because of legal concerns, that he expected to receive a presidential pardon if his actions were deemed to be illegal, according to the complaint.” Pardons don’t cover lawsuits, tho
A not true but very very true observation on public opinion in the modern era
John Adams: “Great is the guilt of an unnecessary war.”
James Carville: this, but 11 minutes long with many many many many many many many bad words
This is one of the best conversations I've heard about AI -- and I've spent most waking hours these last 6 months reading and listening to people talk about it
Ezra Klein interviews Dean W. Ball about the implications of Trump v. Anthropic, and the policy dilemmas ahead
podcasts.apple.com/fi/podcast/w...
‘A.I. Brain Fry’: The second research study in just a couple of weeks finds that hyper-productive workers using AI in novel ways risk mental overload — taking on new tasks, too many tasks. The study says managers can make a difference in how they set A.I. policy. hbr.org/2026/03/when...
Summary: 1) Anthropic still negotiating with Pentagon 2) Supply chain risk designation has arrived via letter; it’s narrowly written, affects minimal slice of business 3) Apologizes for angry memo — says he didn’t leak it; was written amid anger last Friday www.anthropic.com/news/where-s...
Very disturbing implications for a couple of places where I’ve lived, and encouraging ones for the place where I’m from.
From Pew: “We asked people around the world to rate the morality and ethics of others in their country.”
Hm. He mentions donations too. So I have some company in wondering whether grubby self-dealing is driving a radical shift in U.S. national-security policy involving a vital technology —>
The lawsuits against Facebook over these glasses are going to involve amounts that could build hotels on Mars
Update: Ah, FFS
Meanwhile: There's a hearing today in the US Congress on AI safety. Previously scheduled, but unfolding amid the Pentagon-Claude-OpenAI clusterschmozzle.
Trump's ex-AI advisor sees the forest for the trees: Claude v Pentagon isn't just about one company getting squeezed; it's another milestone in democratic decline. Using state levers to crush disfavored companies = Authoritarianism 101. OpenAI participated in this. No redactions can undo that
It’s one thing to disagree on terms of a contract. Or to cancel said contract.
It’s another thing to set out to completely crush a company over different standards on mass-surveillance & autonomous weapons. And move to benefit a company with closer financial ties to the administration. That’s new
I wrote a blog post on the appropriate response to blatant authoritarian over-reach like this —>
Here are three steps to consider in switching away from ChatGPT, as the surveillance-weapons deal with the Trump admin sends a record stampede toward Claude. alexpanetta.substack.com/publish/post...
The market reaction to the U.S. military threatening to destroy Claude: boosting it to No. 1 on the Apple App Store.
/ Amazingly this is now the No. 2 story of the day but here’s a rundown of the Pentagon-Anthropic situation with links to a bunch of sources and the chatboards in Silicon Valley.
dailyaidigest.net
Me greeting people in my neighborhood, located between the embassies of Iran, Israel and the U.S. ambassador’s residence: “Buongiorno! English? No, scusa. Sono italiano!” (“Good morning! English? No, sorry. I’m Italian!”)
/ What a situation, in any case. A lead author of Donald Trump’s AI policy, just a few months ago, is now telling AI companies: “Don’t start in the U.S. Start in Canada, the UK, Australia, UAE…”
/ Who knows. Maybe it’s unrelated to all of this.
/ An alternative theory of what’s happening: Claude has been catching up to OpenAI in corporate use, as a far superior tool. And a historically corrupt U.S. administration that has major investment ties to OpenAI found a way to unbalance the playing field.
Thoughts and prayers for US intelligence analysts stuck using ChatGPT, instead of Claude
Trump Orders Government to Stop Using Anthropic After Pentagon Standoff www.nytimes.com/2026/02/27/u...
One thing we've been talking about at school this week is building your ethical code in at the root, not trying to staple it on at the end.
Whether or not Anthropic is fighting a just cause now, and whether or not it intends to stick with it, it's worth asking what the company expected in November 2024 and July 2025:
1. Announcing an intelligence partnership with Palantir *two days* after the last presidential election: investors.palantir.com/news-details...
2. Signing a Department of Defense contract weeks after the president began deploying the National Guard in US cities: www.anthropic.com/news/anthrop...
Now that this Anthropic-Pentagon story has blown up, here’s something CEO Dario Amodei might wind up getting pushed on. If you are in fact morally offended by autonomous weapons and mass surveillance to the point of referring to them as crimes against humanity, might you have considered not:
/
2/2 From I, Me, Mine, to crime and the economy — what LLMs find... and what they don't
More context here: substack.com/home/post/p-...
From FDR to Trump: What turns up when you feed Claude Code 90 years of State of the Union speeches.
Some patterns surprised me, some didn't. 1/
In a normal news cycle in a normal year, Anthropic vs. the Pentagon would be the story of the year. The military threatening a top A.I. lab over a defining question of our century’s technology — its use in autonomous weapons & mass surveillance. Bigger than post-Olympic hockey micro-controversies!
/ man they’re gonna wind up making another movie about this shit