
John Herrman

@jwherrman.bsky.social

posting about posts at new york magazine. have me on your podcast!

8,764 Followers  |  619 Following  |  728 Posts  |  Joined: 28.04.2023

Posts by John Herrman (@jwherrman.bsky.social)

a sharper way to make this argument would have been to simply point out: Anthropic is building in a world where Pete Hegseth — a genuine and obviously incompetent maniac — has meaningful power over it

02.03.2026 14:22 — 👍 15    🔁 5    💬 1    📌 0

q: hey why is it ok you scraped everyone's content to build your product
a: welllllll
q: hey, why is it different when companies scrape the proprietary data you distilled from ours
a: national security, next question

26.02.2026 14:11 — 👍 6    🔁 0    💬 0    📌 0
Anthropic is a ripe target here as far as jokes about hypocrisy are concerned: It's pitched as the conscientious AI lab, but it also settled last year, agreeing to pay out $1.5 billion to authors whose pirated books it used for training. These posts represent a fair critique of all of the big players, which have ingested enormous quantities of material created by others, often without permission, to build proprietary models over which they now claim something like authorship. The scrapers have become the scraped, their own powerful distillations of the world's information sampled, reconstituted, and distilled once more.

The backlash here isn't just about that irony. Anthropic is, at the moment, the AI lab to beat and the company whose products are most responsible for recent speculation about how AI might blow up the economy. As a result, mockery wasn't coming just from people whose content had been scraped by Anthropic or who generally object to the way LLMs are trained. It was coming from AI insiders who see big firms as pulling up the ladder or trying to fortify their early dominance with the help of regulators, copyright law, and government funding. Within the story of an international arms race, model distillation can be cast as a threat to national security and American economic competitiveness. Within some of the other stories about AI, it might look more like fear of competition in general: of cheaper models; of free, open-source models; and of the rapid commoditization of capabilities that, just a few months prior, were unique and prohibitively expensive to develop. The AI firms called out by Anthropic — DeepSeek, Moonshot, and MiniMax — make models that are open to use not just in China but in the U.S. and elsewhere and that are already competing for some of the same customers. Moonshot's latest Kimi models seem to perform, for many functions, about as well as the best American models did in the middle of last year. DeepSeek, whose arrival briefly sent the AI industry and the stock market into chaos, is expected to release a major model update imminently, which may help explain why the big labs are all speaking up at the same time.

it was interesting to see a bunch of AI people suddenly start making Bluesky jokes about model scraping at Anthropic's expense — it's almost as if the LLM theft critique is grounded in something real and significant! nymag.com/intelligence...

26.02.2026 13:53 — 👍 39    🔁 7    💬 1    📌 1

This was, if I say so myself, a banger.

25.02.2026 17:55 — 👍 50    🔁 12    💬 2    📌 3

On this, really interesting piece on X's "ideological ratchet" by @jwherrman.bsky.social: the platform is pulling users to the right, even elites who thought they were immune to radicalisation
nymag.com/intelligence...

23.02.2026 09:35 — 👍 66    🔁 26    💬 1    📌 3

hawking this link again. elite social media radicalization is vastly underemphasized compared to mass "misinformation," etc nymag.com/intelligence...

19.02.2026 16:47 — 👍 14    🔁 3    💬 0    📌 1

the way the answer does in fact equivocate is funny, but that's not really the point — it's been tuned to respond to a slightly different question with the answer, "anyway, doesn't matter, might makes right"

18.02.2026 17:33 — 👍 58    🔁 4    💬 1    📌 0

the character it's playing is certainly closer to "tutor" or "librarian" (vs "contemptuous teenage debater")

18.02.2026 17:22 — 👍 3    🔁 0    💬 0    📌 0

from what I've heard about xAI they basically red-team for "wokeness" but in a haphazard way that frequently involves forwarded tweets from the boss

18.02.2026 17:19 — 👍 35    🔁 2    💬 2    📌 0

In that sense, Grokipedia — like X and Grok — is also a warning. Sure, it's part of an excruciatingly public example of one man's gradual isolation from the world inside a conglomerate-scale system of affirming, adulatory, and ideologically safe feeds, chatbots, and synthetic media, a situation that would be funny if not for Musk's desire and power to impose his vision on the world. (To calibrate this a bit, imagine predicting the "Wikipedia rewritten to be more conservative by Elon Musk's anti-PC chatbot" scenario in the run-up to, say, his purchase of Twitter. It would have sounded insane, and you would have too.) But what Musk can build for himself now is something that consumer AI tools, including his, will soon allow regular people to build for themselves, or which will be constructed for them by default: a world mediated not just by publications or social networks but by omnipurpose AI products that assure us they're "maximally truth-seeking" or "objective" as they simply tell us what we want to hear.

more on this whole project from last year nymag.com/intelligence...

18.02.2026 17:13 — 👍 93    🔁 9    💬 3    📌 1

Great question! 🦾 Let me look into that for you.

It appears that:

• Comedy is legal again 🚀

18.02.2026 17:05 — 👍 50    🔁 0    💬 0    📌 0

a helpful term is "recursive dispossession" (rob nichols's book is open access) library.oapen.org/handle/20.50...

18.02.2026 17:01 — 👍 35    🔁 9    💬 1    📌 0
an elon musk tweet comparing answers to the question "is the US on stolen land"

three answers suggesting you might want to think about the question a little bit and one releasing you from ever thinking about anything again

18.02.2026 16:59 — 👍 1253    🔁 182    💬 77    📌 51

We are in an AI thought leadership bubble

17.02.2026 20:07 — 👍 13    🔁 2    💬 1    📌 0

yeah, a social network of comments missing their post-parents

17.02.2026 20:44 — 👍 4    🔁 0    💬 0    📌 0
By specifically referencing hedging, Selig draws a parallel between what Kalshi and Polymarket allow people to do and, say, how a farmer minimizes the risk of an unpredictable harvest by selling grain-futures contracts. (Worried that a given candidate winning an election might hurt your business? Place a hefty bet on him or her on prediction markets to balance your risk profile — so goes this argument.) In doing so, the CFTC chair sounds an awful lot like prediction-market executives, who prefer to emphasize how their field is more useful to the world than, say, DraftKings. "I just don't really know what this has to do with gambling," Kalshi CEO Tarek Mansour told Axios last year. "Every contract has a hedging use case, even the less obvious ones," argued Kalshi's Samantha Schwab — of the Schwabs — around the same time. (Schwab has since been appointed deputy chief of staff for the U.S. Treasury Department.) As a narrow regulatory matter, these assertions now seem temporarily settled in that it's the position of the government that Kalshi and Polymarket have nothing to do with gambling. (For lack of a better place to mention it, I'll state here that Donald Trump Jr. is an adviser to both companies and an investor in at least one.) But the resolution of these questions arrived just as it was becoming abundantly clear — in the numbers but also to anyone who has engaged with these platforms at all or knows anyone who has — that most of the action on the big prediction platforms revolves around sports. From The Wall Street Journal:

Kalshi and Polymarket, the biggest prediction-market platforms, have attracted attention for offering outlandish bets such as whether the Trump administration will buy Greenland ... But sports remain the overwhelming majority of their business, giving a sports-betting-crazed nation a new way to participate in America's favorite new pastime.

the trump administration is doing everything it can to clear the regulatory and legal path for prediction markets, which it says are definitely NOT sports betting, no way, couldn't be nymag.com/intelligence...

17.02.2026 14:42 — 👍 19    🔁 7    💬 2    📌 3

thanks tim!

17.02.2026 14:50 — 👍 1    🔁 0    💬 1    📌 0
According to Dustin Gouker of The Closing Line, the 30-day trading volume on the prediction markets shortly before the Super Bowl was about $8.4 billion, of which "$7.7 billion of that volume, or 91 percent," involved sports. However you want to conceptualize or regulate these apps, you have to admit they're competing with online sports gambling, which was itself widely prohibited until 2018 and is now regulated on a state-by-state basis. According to Apptopia, in January, the Kalshi app got more downloads than FanDuel and DraftKings "have ever had in a single month." Do most of these downloaders understand themselves to be hedging against risks and enjoying the democratization of derivatives trading? Or are they trying to find a way to deal with their exposure to ... the Pats losing? To Mr. Beast saying "dollar" 21 times instead of ten? The CFTC says, "Maybe!" The users themselves say, mostly, lol. (Kalshi spokesperson Elisabeth Diana emailed to clarify: "Sports certainly has a hedging case — just last week, a sports insurance company, Game Point Capital, announced it's begun hedging on Kalshi, along with sports app PrizePicks." In a post on X, Mansour described how an insurance broker used Kalshi to hedge "for two different teams against performance bonuses.") Prediction markets are, in fact, growing quickly in other areas where they could have a strange and outsized influence — politics, in contrast with sports, doesn't really have a well-developed immune system to deal with this sort of thing. Over the next few years, though, the only barriers to the predictification of everything are lawsuits, some state-level regulations, and lobbying by the sports-betting industry, which clearly sees these platforms as direct competitors.

the "prediction markets democratize hedging" argument is true, as is "prediction markets are good at forecasting." the issue is that neither one of these arguments is particularly relevant to the main thing people are doing with them

17.02.2026 14:42 — 👍 7    🔁 1    💬 1    📌 0

I... had not noticed this, and it seems inarguable.

Algorithmic media and social media are completely different things.

Smashing the two together in different dilutions is leading us into weird places.

16.02.2026 15:47 — 👍 30    🔁 11    💬 0    📌 2

NEW POD: Went long with NYMAG tech columnist @jwherrman.bsky.social about how prediction markets are coming for political media, politics, and reality itself; the rise of the prediction market "sharp" as an aspirational career path; and what happens when we use markets to adjudicate reality

16.02.2026 18:35 — 👍 3    🔁 1    💬 1    📌 0
The YouTube Vibecession: By the numbers, everything is going great for creators. So why are so many of them scared it's all about to fall apart?

Good @jwherrman.bsky.social article β€” and that's especially high praise when you're in the industry being covered. Opposite of Gell-Mann effect. nymag.com/intelligence...

13.02.2026 18:36 — 👍 9    🔁 3    💬 1    📌 0

appreciate that!

13.02.2026 18:45 — 👍 1    🔁 0    💬 1    📌 0

[through a ring camera] have you heard the good news?
[through the open door] we're all going to hell!

13.02.2026 18:09 — 👍 3    🔁 0    💬 0    📌 0
Imagine you work in AI alignment or safety; are receptive to the possibility that AGI, or some sort of broadly powerful and disruptive version of artificial-intelligence technology, is imminent; and believe that a mandatory condition of its creation is control, care, and right-minded coordination at corporate, national, and international levels. In 2026, whether your alignment goal is keeping chatbots from turning into social-media-like manipulation engines for profit or maintaining control of a technology you worry might get away from us in more fundamental ways, the situation looks pretty bleak. From a position within OpenAI, surrounded by ex-Meta employees working on monetization strategies and engineers charged with winning the AI race at all costs but also with churning out deepfake TikTok clones and chatbots for sex, you might worry that, actually, none of this is being taken seriously and that you now work at just another big tech company — but worse. If you work at Anthropic, which at least still talks about alignment and safety a lot, you might feel slightly conflicted about your CEO's lengthy, worried manifestos that nonetheless conclude that rapid AI development is governed by the logic of an international arms race and therefore must proceed as quickly as possible. You both might feel as though you — and the rest of us — are accelerating uncontrollably up a curve that's about to exceed its vertical axis.

This is genuinely fun stuff to think about and experiment with, but the people sharing Shumer's post mostly weren't seeing it that way. Instead, it was written and passed along as a necessary, urgent, and awaited work of translation from one world — where, to put it mildly, people are pretty keyed up — to another. To that end, it effectively distilled the multiple crazy-making vibes of the AI community into something potent, portable, and ready for external consumption: the collective episodes of manic acceleration and excitement, which dissipate but also gradually accumulate; the open despair and constant invocations of inevitability by nearby workers; the mutual surveillance for signals and clues about big breakthroughs; and, of course, the legions of trailing hustlers and productivity gurus. This last category is represented at the end of 26-year-old Shumer's post by an unsatisfying litany of advice: "Lean into what's hardest to replace"; "Build the habit of adapting"; and, while this all might sound very disruptive, your "dreams just got a lot closer." The essay took the increasingly common experience of starting to feel sort of insane from using, thinking, or just consuming content about AI and bottled it for mass sharing and consumption. It was explicitly positioned as a way to let people in on these fears, to shake them out of complacency, and to help them figure out what to do. In practice, and because we're talking about social media, it seemed most potent and popular among people who were, mostly, already on the same page.

This might explain why it has gotten a bit of a pass — as well as a somewhat more muted response from the kinds of core AI insiders whose positions he's summarizing — on a few things: Shumer's last encounter with AI virality, which involved tuning a model of his own and being accused of misrepresenting its abilities, followed by an admission that he "got ahead of himself"; the post's LinkedIn-via-GPT structure, format, and illustration…

wrote about That AI Essay, the "scare trade," and safety researchers deciding to quit in public nymag.com/intelligence...

13.02.2026 15:42 — 👍 36    🔁 11    💬 7    📌 0

it's fine, society will adapt (by universally wearing rubber zuck masks in public)

13.02.2026 17:08 — 👍 3    🔁 0    💬 1    📌 0

There's a real "let's do everything they told us we couldn't" thing going on at Meta now. The facial recognition glasses bring to mind an old story I haven't seen mentioned today: Facebook built a feature like this a decade ago and didn't release it www.businessinsider.com/facebook-bui...

13.02.2026 16:51 — 👍 148    🔁 61    💬 6    📌 5
Anthropic's executives preferred to dwell on sunnier developments. Amodei frequently notes that he lost his father to an illness that has since proved treatable. An employee told me, in turn, that he doesn't worry about wearing sunscreen or getting his moles checked because Claude will cure all tumors. Not all the people on Amodei's payroll buy such speculation, but most of them expect that life as we know it will be wholly transformed. The researcher Sam Bowman told me he'd recently attended a picnic that had been autonomously organized by a gang of language models; they'd recruited a human volunteer to fetch a cake. Amodei envisions a "country of geniuses in a data center": millions of copies of Claude, each with the talents of John von Neumann. This does not seem like pure fantasy. In January, a Google engineer tweeted that a project that took her team an entire year had been accomplished by Claude in an hour.

Put another way, I think stories of inevitability all kind of function like this — in one direction or another — producing *inaction* rather than response: www.newyorker.com/magazine/202...

13.02.2026 16:32 — 👍 8    🔁 2    💬 1    📌 0

ha, yeah, a few people asked him about this on twitter and he was sorta like 🤷

13.02.2026 15:58 — 👍 1    🔁 0    💬 0    📌 0
In the 1950s, lawmakers, alarmed by "ever new and startling developments in automation" reported in "not only the trade magazines but the mass-circulation popular magazines," held hearings on the subject. The definition of automation, Resnikoff writes, was a "consistent concern" that haunted and ultimately helped neutralize the project. The panel concluded that "the economic significance of the automation movement is not to be judged or limited by the precision of its definition" and that, in any case, they were "clearly on the threshold of an industrial age." Awed by the future yet unable to agree on how to describe it, the committee concluded that "no specific broad-gauge economic legislation appears to be called for," appealing to "enlightened management" to mitigate potential displacement and harms and warning labor leaders against "a blind defense of the status quo." After much deliberation, in other words, the imminent remaking of the economy, and humanity's place in the world, was reduced to an awareness campaign. Broad-based "automation" was just a matter of time. It wasn't the government's place to tell businesses how to handle it, and it wasn't the businesses' place to do anything but enable it to its maximum potential, just ... carefully. Automation framed the future in terms that made asking for things in the present — marginally better terms for workers, for example — sound like a waste of energy. It was, Resnikoff suggests, an argument for abandoning work, and the workplace, as a contestable, organizable, political space. Why bother? The end of work is nigh.

AGI, like G-less AI, automation, and even mechanization, is indeed a story, but it's also a sequel: This time, the technology isn't just inconceivable and inevitable; it's anthropomorphized and given a will of its own. If mechanization conjured images of factories, automation conjured images of factories without people, and AI conjured humanoid machine assistants, AGI and ASI conjure an economy, and a wider world, in which humans are either made limitlessly rich and powerful by superhuman machines or dominated and subjugated (or perhaps even killed) by them (Industrial Revolution 3: The Robot Awakens). In imagining centralized machine authoritarianism in the future, AGI creates a sort of authoritarian, exclusionary discourse now. A narrative emerges in which the decisions of AGI stakeholders — AI firms, their investors, and maybe a few government leaders — are all that matter. The rest of us inhabit the roles of subject and audience but not author.

last year, i wrote about how gathering periods of technological change into a single, fearsome, approaching threshold can work against the stated goals of doing so — an awareness campaign for everyone's own helplessness nymag.com/intelligence...

13.02.2026 15:42 — 👍 5    🔁 2    💬 1    📌 0