Rob Horning

@robhorning.bsky.social

robhorning.substack.com

4,689 Followers 266 Following 565 Posts Joined Jul 2023
23 hours ago

the people taking the "is it AI" test are also positioned to act as though context and purpose don't matter to writing's "quality" and that it is ok to make evaluations from a position of ignorance (most polls also do this)

23 hours ago
Quizzes like the Times’ are games that L.L.M.s are designed to excel at. A.I. writing is literally optimized to be the writing most people prefer in A/B preference tests; the main thing an L.L.M. chatbot “wants” when replying is to be generating the text that its users would choose as a good answer to the prompt over all other possibilities.

AI writing is optimized to seem "good" to people who couldn't care less about the context or the larger point the writing is supposed to serve
maxread.substack.com/p/what-do-wh...

3 days ago

that promise is never fulfilled, and consuming "personalized" content (including chatbot "friendship") just makes the emptiness emptier, more isolating

3 days ago

I tend to think even "content" depends on some shared social interest in the material it conveys, but slop can seem predicated on eliminating that — it promises you can consume and enjoy it in isolation like "digital drugs"

3 days ago
People want more content than humans can supply. This does not mean that, taken as a collective whole, there is not enough content in the universe of extant media. Rather, for any given person, there is content they would be willing—even eager—to consume that is not yet being provided. However, making this content comes at a cost. The space of hyper-specialized content people would consume is vast. But for most of this niche content the audience is too small to justify its production. In economics, the balance between the benefit to potential consumers and the cost of production is called optimum product diversity.

this claims that there is no theoretical limit to the demand for hyper-personalized content; but the actual demand for actual content is dictated by social relations (desiring the desire of the other, etc.) arxiv.org/pdf/2601.06060

3 days ago

Wild honey pie

6 days ago

Anything medical or botanical, no matter how obscure, is accepted, but “inanition” and “ontic” have no being

6 days ago
Post image

What I feel when this gets denied as a word

1 week ago

AI slop is a potent strategy of warfare in spreading disinfo and sowing doubt. Quickly produced at scale, AI slop drowns out human-made content and goes viral as spammers detect and exploit weaknesses in algorithms.

1 week ago
The reliably credulous New York Times described prediction markets as "infrastructure for the legitimacy of event outcomes", echoing the confidence of company founders, and went on to say, "Platforms like Polymarket and Kalshi now allow users to bet on virtually any outcome of a future event." Coverage like this cites the range of disparate possible bets (Oscar nominations, Grammy winners, how often Elon Musk will tweet this month, whether Israel will strike Iran) as proof of unlimited potential. This is despite the fact that the majority of trade volume is on sports outcomes. Non-sports events don't generate enough trade volume to be worthwhile for the platform or a user. Platforms decide what events are available to trade on, and they determine the terms of payouts. Speed of growth does not indicate long-term viability, but rather investor pressure to demonstrate proof of concept at increasing scale and attract more users every day.

at some point athletes throwing games will be fully rehabilitated as offering a form of insider knowledge to help the world predict who's not cheating www.tank.tv/magazine/iss...

2 weeks ago
The 2020s’ defining feature, a statement of the real you against the horrific “better you” that was sold to you by crumbling systems.

A.I. Is Giving You a Personalized Internet, but You Have No Say in It
The relentless addition of artificial intelligence in popular apps raises questions about what’s at stake. The answer: the future of the internet and its lifeblood, digital advertising.

"personalization" is manipulation 1234kyle5678.substack.com/p/enter-the-...
www.nytimes.com/2026/02/10/t...

2 weeks ago

but this attitude toward productivity necessarily becomes increasingly abstract, where one is "productive" for the sake of being productive (without any particular intention in mind, just like the LLMs), as if that weren't the apotheosis of alienation

2 weeks ago

that is how capitalists relate to the labor they hire, so it makes sense as an aspiration, perhaps, if you want to become a labor exploiter rather than a maker of things

2 weeks ago

you too can become a crypto-stakhanovite who becomes "more productive" by subtracting more of oneself from intellectual processes, and instead claiming a kind of vicarious relation to work: "I watched this work happen, so I in fact did it"

2 weeks ago
You will start to notice a certain thematic consistency among the most bullish AI advocates: a hyper-personalized fixation on personal productivity gains to the occlusion of all else. Including factual reality, ethics, and foundational empathy.

being unable to conceive of any other goals than maximizing one's "personal productivity" must be a terrible way to go through life

2 weeks ago
People Loved the Dot-Com Boom. The A.I. Boom, Not So Much.

From this article about how “AI” puts people off who don’t see having their skills replaced as “convenient” www.nytimes.com/2026/02/21/t...

2 weeks ago
No wonder Mr. Thomas, who is 78, often feels frustrated. He fantasizes about punching a young tech worker in the face. And yet. He had ChatGPT write a speech for his wife's birthday. It was beautiful and eloquent.
All of which means the future of A.I. could probably go either way.

“And yet” should be changed to “because”—people can see that they will use these tools to the detriment of their own relationships

3 weeks ago
I have argued in the past that people are overestimating the organisational level benefits of AI because they are extrapolating from individual experiences, and speeding up production behind a bottleneck doesn’t increase output (although it might reduce it). But one thing I haven’t emphasised enough is that bottlenecks are not natural obstacles – they are, in most cases, the consequence of increasing production until you hit a bottleneck. If AI removes a bunch of bottlenecks, that won’t be used to produce the same output faster and cheaper, it will be used to produce a lot more output until a new bottleneck is reached and requires human intervention. (Weirdly, there was a two week period after the announcement of DeepSeek when all the techbros were wailing at their share prices and shouting “it’s Jevons Paradox you idiots”, but this got really quickly forgotten.)

evergreen lesson of automation: it's not a way of eliminating bottlenecks but of inventing new and more resistant ones backofmind.substack.com/p/finally-we...

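A toy sketch (my illustration, not from the linked post) of the bottleneck arithmetic above: a serial pipeline's sustainable output is the minimum of its stage rates, so speeding up one stage only promotes the next-slowest stage to being the new bottleneck. The stage names and rates are invented for the example.

```python
# Toy model of the bottleneck argument: a serial pipeline's sustainable
# output is capped by its slowest stage, so "removing" one bottleneck
# just makes the next-slowest stage the new binding constraint.

def throughput(stage_rates):
    """Units per hour a serial pipeline can sustain: the minimum stage rate."""
    return min(stage_rates)

stages = {"draft": 10, "edit": 4, "publish": 7}  # "edit" is the bottleneck
print(throughput(stages.values()))  # 4

stages["edit"] = 40  # automate editing: its rate is no longer binding
print(throughput(stages.values()))  # 7 -- "publish" is the new bottleneck
```

Output never reaches the fastest stage's rate; it only climbs to the next constraint, which is the post's point about automation inventing new bottlenecks rather than eliminating them.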
7 months ago

The most depressing AI pieces are always going to be the thoughtful, nuanced, open-minded considerations by respected writers who are transparently responding to the publicity incentive created by editors whose owners want this kind of content

3 weeks ago

article suggests that algorithmic feeds are (1) optimized for how much "falsity" users prefer, inculcating the sense that no truths are universally shared and (2) designed to maintain perpetual conflict among users to reinforce affect over the emergence of common ground, which is unprofitable

3 weeks ago
Vectofascism does not simply produce a lie. What characterizes it is rather the production of a calculated undecidability, of a gray zone where the very status of the statement becomes indeterminable. Post-truth is not the absence of truth but its submersion in a flow of contradictory information whose sorting would require a cognitive effort exceeding available attentional capacities. This strategy exploits a fundamental asymmetry: it is always more costly in terms of cognitive resources to verify a statement than to produce it. Producing a complex lie costs a few seconds; demystifying it can require hours of research. Vectofascism pushes this logic to the point of transforming veracity itself into a variable of algorithmic optimization. The question is no longer ‘is it true?’ but ‘what degree of veracity will maximize engagement for this specific segment?’

a.k.a. truth and falsity in their ultramoral sense
carrier-bag.net/vectofascism...

1 month ago

GenAI is a key activator in the Anti-Vice Popular Front: its output & industry also tell you the rules are over. The increasing volume of synthetic content contributes to the broken windows theory of the information landscape: "the more your environment is vandalised, the less care you take of it."

1 month ago
Loneliness Generators: The lonelier you are, the further you can run.

an essay I wrote last year about AI "companions" www.emptysetmag.com/articles/lon...

1 month ago

It’s a category mistake nobody really talks about: most AI companies are not trying to sell creative tools, they are trying to sell content streams.

1 month ago

👇🏻

1 month ago
Post image

some of her other aliases

1 month ago

also wonder if her LLM wears sunglasses at night

1 month ago
Last February, the writer Coral Hart launched an experiment. She started using artificial intelligence programs to quickly churn out romance novels.

Over the next eight months, she created 21 different pen names and published dozens of novels. In the process, she discovered the limitations of using chatbots to write about sex and love.

Why would anyone buy an AI-written romance novel when you can just prompt the chatbots yourself and "write" your own? www.nytimes.com/2026/02/08/b...

1 month ago

though I doubt "AI" will be widely replaced with "probabilistic automation," it probably should be. (I awkwardly try to put "AI" in quotes when I use it but have often given in to anthropomorphizing usage)

1 month ago

also seems like a good guide to which writing about AI not to take so seriously; it shows who is either not thinking carefully enough about the topic or deliberately writing obfuscatory hype
