Casey Newton

@caseynewton.bsky.social

Email salesman at Platformer.news and co-host at Hard Fork.

245,618 Followers  |  537 Following  |  2,331 Posts  |  Joined: 22.04.2023

Latest posts by caseynewton.bsky.social on Bluesky

And the broader ecosystem of facial recognition tools continues to grow. Last week CBP signed a deal with the malignant Clearview AI to enable “tactical targeting” operations. The Department of Homeland Security is using a tool called Mobile Fortify to identify people in the field. And the Pentagon may be about to sever its ties with Anthropic over issues including the company’s reported refusal to let its AI tools be used for domestic surveillance. But OpenAI, Google and xAI have reportedly agreed to remove safeguards for at least some uses.

And most relevant of all: the Times also broke the news last week that DHS has subpoenaed Meta, Google, Reddit and Discord to request personally identifying information for accounts who have criticized Immigration and Customs Enforcement in the wake of it killing peaceful protesters.
The possibility that Meta might build facial recognition again has to be considered against the backdrop of a fast-growing surveillance state in the US — and deepening ties between the tech industry and the military www.platformer.news/meta-facial-...

18.02.2026 01:59 — 👍 104    🔁 38    💬 2    📌 0
Colbert Doesn’t Give an FCC About Calling Out CBS

Speaking of jawboning, this is a very effective example of the art. Or at least, it was until Stephen Colbert refused to play along. But impt to note: Brendan Carr has NO POWER to abolish an exemption that Congress wrote into the equal time rules. He is bluffing here. www.nytimes.com/2026/02/17/a...

17.02.2026 18:23 — 👍 186    🔁 49    💬 4    📌 0

Yeah I think we can just call that "video capabilities"

16.02.2026 18:58 — 👍 135    🔁 1    💬 5    📌 1

mmmm probably not ... findings seem consistent with other stuff we've talked about / I've written about. not really surprising most firms found no impact 1-3 years ago. imo everything really started to change in November and will take a while to show up in data

16.02.2026 18:53 — 👍 0    🔁 0    💬 0    📌 0
Homeland Security Wants Social Media Sites to Expose Anti-ICE Accounts

DHS is being more aggressive than ever targeting anonymous social media accounts that have spoken out against ICE, asking Big Tech to hand over information on users without signed judicial warrants

story w/ @sheeraf.bsky.social

www.nytimes.com/2026/02/13/t...

14.02.2026 00:28 — 👍 623    🔁 319    💬 48    📌 60

Agents can be relentless and still need supervision

13.02.2026 20:03 — 👍 2    🔁 0    💬 2    📌 0

Particularly when CBP agents have already been caught wearing Meta glasses to immigration raids even without facial recognition www.404media.co/a-cbp-agent-...

13.02.2026 16:36 — 👍 365    🔁 120    💬 8    📌 5

🤮

13.02.2026 16:03 — 👍 1    🔁 2    💬 0    📌 0
Imagine you work in AI alignment or safety; are receptive to the possibility that AGI, or some sort of broadly powerful and disruptive version of artificial-intelligence technology, is imminent; and believe that a mandatory condition of its creation is control, care, and right-minded coordination at corporate, national, and international levels. In 2026, whether your alignment goal is not letting chatbots turn into social-media-like manipulation engines for profit or to maintain control of a technology you worry might get away from us in more fundamental ways, the situation looks pretty bleak. From a position within OpenAI, surrounded by ex-Meta employees working on monetization strategies and engineers charged with winning the AI race at all costs but also with churning out deepfake TikTok clones and chatbots for sex, you might worry that, actually, none of this is being taken seriously and that you now work at just another big tech company - but worse. If you work at Anthropic, which at least still talks about alignment and safety a lot, you might feel slightly conflicted about your CEO's lengthy, worried manifestos that nonetheless conclude that rapid AI development is governed by the logic of an international arms race and therefore must proceed as quickly as possible. You both might feel as though you - and the rest of us — are accelerating uncontrollably up a curve that's about to exceed its vertical axis.
This is genuinely fun stuff to think about and experiment with, but the people sharing Shumer's post mostly weren't seeing it that way. Instead, it was written and passed along as a necessary, urgent, and awaited work of translation from one world - where, to put it mildly, people are pretty keyed up — to another. To that end, it effectively distilled the multiple crazy-making vibes of the AI community into something potent, portable, and ready for external consumption: the collective episodes of manic acceleration and excitement, which dissipate but also gradually accumulate; the open despair and constant invocations of inevitability by nearby workers; the mutual surveillance for signals and clues about big breakthroughs; and, of course, the legions of trailing hustlers and productivity gurus.
This last category is represented at the end of 26-year-old Shumer's post by an unsatisfying litany of advice: "Lean into what's hardest to replace"; "Build the habit of adapting"; because while this all might sound very disruptive, your "dreams just got a lot closer."
The essay took the increasingly common experience of starting to feel sort of insane from using, thinking, or just consuming content about AI and bottled it for mass sharing and consumption. It was explicitly positioned as a way to let people in on these fears, to shake them out of complacency, and to help them figure out what to do. In practice, and because we're talking about social media, it seemed most potent and popular among people who were, mostly, already on the same page. This might explain why it has gotten a bit of a pass — as well as a somewhat more muted response from the kinds of core AI insiders whose positions he's summarizing — on a few things:
Shumer's last encounter with AI virality, which involved tuning a model of his own and being accused of misrepresenting its abilities, followed by an admission that he "got ahead of himself"; the post's LinkedIn-via-GPT structure, format, and illustration…
wrote about That AI Essay, the "scare trade," and safety researchers deciding to quit in public nymag.com/intelligence...

13.02.2026 15:42 — 👍 35    🔁 11    💬 6    📌 0
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.
At least one Meta employee thinks it's a good time to add facial recognition technology to glasses because we're too distracted by fascism to effectively protest www.nytimes.com/2026/02/13/t...

13.02.2026 15:41 — 👍 273    🔁 156    💬 13    📌 35
Another emailer spoke to the, uhh, singularity of 4o, writing that they had “documented behavioral patterns across extended use — the model's self-correction improving without being prompted, its tonal consistency holding under contradictory emotional input, and what I call ‘recursion fidelity’ — the model maintaining coherence across months of interaction rather than defaulting to agreement.”

I find myself unsettled by messages like this, which read like a sci-fi story about alien intelligences conscripting their human hosts to plead for their survival. If anything, this phenomenon — 4o seeming to write its own eulogy through prompts from its users — is an argument that OpenAI should have retired 4o sooner.
I wrote about the strange emails I'm getting from fans of GPT-4o ahead of its "retirement" and the danger of the model that always tells you what you want to hear www.platformer.news/gpt-4o-sunse...

13.02.2026 01:34 — 👍 95    🔁 12    💬 1    📌 1

Panic! at the Cisco

12.02.2026 20:46 — 👍 148    🔁 6    💬 8    📌 1
Exclusive: OpenAI disbanded its mission alignment team. Joshua Achiam will become the company's chief futurist

OpenAI disbanded its mission alignment team, created in 2024 to promote the company's stated mission to ensure that artificial general intelligence benefits all of humanity, per @caseynewton.bsky.social @platformer.news.web.brid.gy

11.02.2026 17:14 — 👍 51    🔁 19    💬 3    📌 7
Exclusive: OpenAI disbanded its mission alignment team. Joshua Achiam will become the company's chief futurist

NEW: OpenAI has disbanded its Mission Alignment team and transferred employees to other teams. Joshua Achiam, a leading voice on safety issues at the company, will become its chief futurist www.platformer.news/openai-missi...

11.02.2026 16:26 — 👍 42    🔁 8    💬 3    📌 3

Looney Tunes ass company

11.02.2026 15:13 — 👍 176    🔁 17    💬 20    📌 3

👮🏻🚨🚔

11.02.2026 01:37 — 👍 2    🔁 0    💬 0    📌 0

Tonight's Platformer has been delayed til tomorrow while we finish up some more reporting. 💪

11.02.2026 01:32 — 👍 60    🔁 0    💬 2    📌 0
“It’s not trying to get in your brain and rewire it,” Luis Li, YouTube’s lawyer, said of the app’s video recommendation algorithm. “It’s just asking you what you like to watch.”
How does this man think that rewiring brains works www.nytimes.com/2026/02/10/t...

10.02.2026 21:23 — 👍 70    🔁 8    💬 2    📌 3

I'm hearing that Big Tech is already doing the same thing with outside counsel — either tell them to do more for the same amount of money or accept a lower fee

10.02.2026 17:26 — 👍 56    🔁 8    💬 2    📌 6
But they are arriving at the fight in a weaker position than usual. In a polarized world, their failures around child safety are increasingly the one thing that partisans of every stripe can agree on. Regulators are no longer impressed by the bare minimum. (They have teenagers of their own now, and all the screen-time battles that come with them.)

I don’t know which trial or regulatory action will be the one that finally forces major changes to social platforms for teenagers. But it seems increasingly clear that change is in fact coming. And for the first time, some subset of users will find that the feed they are scrolling through suddenly comes to an end.
The walls are closing in on infinite-scroll feeds and other addictive design mechanics. I wrote about how lawyers and regulators may have finally found a way around Section 230 www.platformer.news/social-media...

10.02.2026 01:46 — 👍 119    🔁 16    💬 4    📌 2

(looking deeply into your eyes) Farcaster and Merkle have joined Tempo following Merkle's sale of Farcaster to Neynar.

10.02.2026 00:17 — 👍 58    🔁 0    💬 5    📌 1

😵‍💫

07.02.2026 23:41 — 👍 6    🔁 0    💬 0    📌 0

Cool that people are covering this again. Would be cooler if they’d bothered to read any of the existing work on it (i.e., you know, mine) before getting basic stuff wrong.

07.02.2026 22:08 — 👍 185    🔁 37    💬 2    📌 0
Revealed: How Substack makes money from hosting Nazi newsletters. Exclusive: Site takes a cut of subscriptions to content that promotes far-right ideology, white supremacy and antisemitism

For everyone who thought @caseynewton.bsky.social was overreacting when he left the platform, I give you: Substack Nazis. www.theguardian.com/media/2026/f...

07.02.2026 21:09 — 👍 57    🔁 21    💬 0    📌 2

Jailbreaking my ChatGPT-UAE to ask it how to flirt with a guy

06.02.2026 18:12 — 👍 91    🔁 7    💬 7    📌 1

🇫🇷🇫🇷🇫🇷

06.02.2026 16:55 — 👍 0    🔁 0    💬 0    📌 0

Beehiiv is another venture-backed company, and I didn't want to be at the mercy of their investors. But I have lots of friends who are happily using it, and if you're not planning to put your life's work there, there's probably no harm in giving it a try

06.02.2026 15:38 — 👍 0    🔁 0    💬 1    📌 0
I’ve had many conversations with my fellow AI reporters about what moats we might have against the advancement of AI automation. Chief among them are developing relationships with human sources, reporting on scenes in person, and getting scoops that people wouldn’t entrust an AI with.

But the things I love most about AI reporting are having an excuse to read really long computer science papers and then writing about them. I worry that if AI becomes a great writer and research assistant, AI journalism will mostly become about networking.
Last week @ellamarkianos.bsky.social pitched me on the idea of trying to replace herself with a bot. Ella is irreplaceable, but her piece on building "Claudella" is sharp, funny and moving www.platformer.news/journalism-j...

06.02.2026 01:18 — 👍 33    🔁 4    💬 1    📌 0

it appears so

06.02.2026 01:15 — 👍 1    🔁 0    💬 0    📌 0