
Rob Horning

@robhorning.bsky.social

robhorning.substack.com

4,689 Followers  |  265 Following  |  556 Posts  |  Joined: 05.07.2023

Posts by Rob Horning (@robhorning.bsky.social)

The 2020’s defining feature, a statement of the real you against the horrific “better you” that was sold to you by crumbling systems.

A.I. Is Giving You a Personalized Internet, but You Have No Say in It
The relentless addition of artificial intelligence in popular apps raises questions about what’s at stake. The answer: the future of the internet and its lifeblood, digital advertising.


"personalization" is manipulation 1234kyle5678.substack.com/p/enter-the-...
www.nytimes.com/2026/02/10/t...

23.02.2026 00:15 — 👍 9    🔁 2    💬 0    📌 0

but this attitude toward productivity necessarily becomes increasingly abstract, where one is "productive" for the sake of being productive (and without any intention in mind, just like the LLMs), as if that weren't the apotheosis of alienation

22.02.2026 21:27 — 👍 8    🔁 2    💬 0    📌 0

that is how capitalists relate to the labor they hire, so it makes sense as an aspiration, perhaps, if you want to become a labor exploiter rather than a maker of things

22.02.2026 21:27 — 👍 11    🔁 4    💬 1    📌 0

you too can become a crypto-Stakhanovite who becomes "more productive" by subtracting more of yourself from intellectual processes, and instead claiming a kind of vicarious relation to work: "I watched this work happen, so I in fact did it"

22.02.2026 21:23 — 👍 10    🔁 5    💬 2    📌 0
You will start to notice a certain thematic consistency among the most bullish AI advocates: a hyper-personalized fixation on personal productivity gains to the occlusion of all else. Including factual reality, ethics, and foundational empathy.

being unable to conceive of any other goals than maximizing one's "personal productivity" must be a terrible way to go through life

22.02.2026 21:23 — 👍 58    🔁 15    💬 1    📌 1
People Loved the Dot-Com Boom. The A.I. Boom, Not So Much.

From this article about how “AI” puts off people who don’t see having their skills replaced as “convenient” www.nytimes.com/2026/02/21/t...

22.02.2026 16:23 — 👍 2    🔁 0    💬 0    📌 0
No wonder Mr. Thomas, who is 78, often feels frustrated. He fantasizes about punching a young tech worker in the face. And yet. He had ChatGPT write a speech for his wife's birthday. It was beautiful and eloquent.
All of which means the future of A.I. could probably go either way.


“And yet” should be changed to “because”—people can see that they will use these tools to the detriment of their own relationships

22.02.2026 16:19 — 👍 5    🔁 0    💬 1    📌 0
I have argued in the past that people are overestimating the organisational level benefits of AI because they are extrapolating from individual experiences, and speeding up production behind a bottleneck doesn’t increase output (although it might reduce it). But one thing I haven’t emphasised enough is that bottlenecks are not natural obstacles – they are, in most cases, the consequence of increasing production until you hit a bottleneck. If AI removes a bunch of bottlenecks, that won’t be used to produce the same output faster and cheaper, it will be used to produce a lot more output until a new bottleneck is reached and requires human intervention. (Weirdly, there was a two week period after the announcement of DeepSeek when all the techbros were wailing at their share prices and shouting “its Jevons Paradox you idiots”, but this got really quickly forgotten).


evergreen lesson of automation: it's not a way of eliminating bottlenecks but of inventing new and more resistant ones backofmind.substack.com/p/finally-we...

20.02.2026 14:31 — 👍 5    🔁 1    💬 0    📌 0

The most depressing AI pieces are always going to be the thoughtful, nuanced, open-minded considerations by respected writers who are transparently responding to the publicity incentive created by editors whose owners want this kind of content

19.07.2025 14:43 — 👍 517    🔁 97    💬 2    📌 7

article suggests that algorithmic feeds (1) are optimized for how much "falsity" users prefer, inculcating the sense that no truths are universally shared, and (2) are designed to maintain perpetual conflict among users to reinforce affect over the emergence of common ground, which is unprofitable

17.02.2026 21:28 — 👍 6    🔁 0    💬 0    📌 0
Vectofascism does not simply produce a lie. What characterizes it is rather the production of a calculated undecidability, of a gray zone where the very status of the statement becomes indeterminable. Post-truth is not the absence of truth but its submersion in a flow of contradictory information whose sorting would require a cognitive effort exceeding available attentional capacities. This strategy exploits a fundamental asymmetry: it is always more costly in terms of cognitive resources to verify a statement than to produce it. Producing a complex lie costs a few seconds; demystifying it can require hours of research. Vectofascism pushes this logic to the point of transforming veracity itself into a variable of algorithmic optimization. The question is no longer ‘is it true?’ but ‘what degree of veracity will maximize engagement for this specific segment?’


a.k.a. truth and falsity in their ultramoral sense
carrier-bag.net/vectofascism...

17.02.2026 21:20 — 👍 12    🔁 2    💬 1    📌 2

GenAI is a key activator in the Anti-Vice Popular Front: its output & industry also tell you the rules are over. The increasing volume of synthetic content contributes to the broken windows theory of the information landscape: "the more your environment is vandalised, the less care you take of it."

11.02.2026 13:09 — 👍 30    🔁 5    💬 1    📌 0
Loneliness Generators: The lonelier you are, the further you can run.

an essay I wrote last year about AI "companions" www.emptysetmag.com/articles/lon...

29.01.2026 18:34 — 👍 14    🔁 4    💬 3    📌 0

It’s a category mistake nobody really talks about: most AI companies are not trying to sell creative tools, they are trying to sell content streams.

10.02.2026 22:39 — 👍 80    🔁 15    💬 3    📌 2

👇🏻

10.02.2026 20:30 — 👍 56    🔁 8    💬 2    📌 0

some of her other aliases

10.02.2026 21:25 — 👍 5    🔁 0    💬 1    📌 0

also wonder if her LLM wears sunglasses at night

10.02.2026 21:22 — 👍 5    🔁 0    💬 1    📌 0
Last February, the writer Coral Hart launched an experiment. She started using artificial intelligence programs to quickly churn out romance novels.

Over the next eight months, she created 21 different pen names and published dozens of novels. In the process, she discovered the limitations of using chatbots to write about sex and love.


Why would anyone buy an AI-written romance novel when you can just prompt the chatbots yourself and "write" your own? www.nytimes.com/2026/02/08/b...

10.02.2026 21:20 — 👍 24    🔁 2    💬 1    📌 1

though I doubt "AI" will be widely replaced with "probabilistic automation," it probably should be. (I awkwardly try to put "AI" in quotes when I use it but often have given in to anthropomorphizing usage)

05.02.2026 16:11 — 👍 5    🔁 0    💬 0    📌 0

also seems like a good guide to which writing about AI not to take so seriously; shows who is either not thinking carefully enough about the topic or is deliberately writing obfuscatory hype

05.02.2026 16:08 — 👍 3    🔁 1    💬 1    📌 0
Our suggestions for alternatives to anthropomorphizing language are presented in Table 3. For two categories, Emotion and Human role analogy, we found that there were no simple replacements. Rather, when faced with such anthropomorphizing language, the best bet was a larger reframing of the matter at hand. For the others, we could often find simple edits within a sentence, as exemplified in Table 3.


useful chart for "deanthropomorphizing" discussions of "AI" firstmonday.org/ojs/index.ph...

05.02.2026 16:03 — 👍 19    🔁 8    💬 1    📌 1

it seems self-confirming: when you can't stop looking at a feed, that seems to prove it really is "for you" and about you, even if it is mostly the same as every other feed; but this fails to explain what makes any particular content compelling. Maybe it is just context-free sensationalism

03.02.2026 17:37 — 👍 3    🔁 0    💬 0    📌 0

social connection is often mundane and trite and requires lots of effort and polite reciprocity, which over time builds a foundation of collective meaning; algorithmic feeds drive all of that out and replace collective meaning with instantaneous, atomizing hyperpersonal overstimulation

03.02.2026 17:16 — 👍 11    🔁 0    💬 1    📌 0

corresponding to the "program of systematic desocialization" is the creation of a social vacuum in people's lives that time on platforms enlarges; to fill it, users seem to turn to more extreme affective content (conspiracy, political antagonism, hate speech, parasocial trolling, gooning, etc.)

03.02.2026 17:12 — 👍 2    🔁 0    💬 1    📌 0

but the corollary is always that feeds train users to think other people are no more than content and can only be consumed as content ("influencing") rather than "connected with" — or that "connection" means seeing an entity's content.
i.e. "a program of systematic desocialization"

03.02.2026 17:07 — 👍 6    🔁 3    💬 1    📌 0

in the past I usually would harp on the idea that users got to consume some algorithmically inferred and constructed version of themselves through feeds and this obscured whatever else was in the content directed at them; it just trained you to think everything was always about you

03.02.2026 17:03 — 👍 3    🔁 0    💬 1    📌 0
I Write the Songs — Real Life: On algorithmic culture and the creation of coercive “fun”

maybe what I was trying to say here reallifemag.com/i-write-the-...

03.02.2026 16:59 — 👍 3    🔁 0    💬 1    📌 0

this was obvious from the advent of TikTok, that it was meant to isolate and alienate users and abolish the "social" aspect of social media; perhaps pandemic isolation muddled that for a time with the idea that people needed platforms to simulate social experience

03.02.2026 16:55 — 👍 8    🔁 0    💬 1    📌 0
If internet firms are defined by their fastest-growing monetized products, well, Meta is basically a Reels company, one that successfully chased TikTok into continued relevance, allowing Mark Zuckerberg to throw money at his next big chase (into generative AI). This isn’t just a formal change from the News Feed to Stories to predictively recommended vertical videos, though. It’s a long (and nearly complete) process of platform desocialization. Platforms originally defined by keeping up with people you know, or have at least heard of, become something fundamentally different.


"desocialization" (like "social deskilling") is a good term for what was once talked about in terms of "social graph" vs. "interest graph," or of "algorithmic recommendation"
nymag.com/intelligence...

03.02.2026 16:54 — 👍 51    🔁 16    💬 1    📌 0

not sure I understand how anything works anymore

01.02.2026 23:06 — 👍 0    🔁 0    💬 0    📌 0