It suggests that meaningful algorithmic innovation is hard and everyone is just grinding the same obvious incremental engineering gains.
21.02.2026 23:30

Standard scientific procedure! Reported significant figures are generally supposed to imply the precision.
19.02.2026 22:13
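
A minimal sketch of the convention the post appeals to, assuming the usual reading that a reported figure is uncertain by half a unit in its last reported place; the helper implied_uncertainty is illustrative, not a standard API.

    from decimal import Decimal

    def implied_uncertainty(reported: str) -> Decimal:
        # Half a unit in the last reported decimal place: writing "9.81"
        # implies +/- 0.005, while "9.8" implies +/- 0.05. Trailing zeros
        # in integers like "980" are ambiguous and are not handled here.
        exponent = Decimal(reported).as_tuple().exponent
        return Decimal(1).scaleb(exponent) / 2

    print(implied_uncertainty("9.81"))  # 0.005
    print(implied_uncertainty("9.8"))   # 0.05
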
I've been using AI intensively since November. Almost all of my writing is the result of an intensive AI dialogue. But the actual AI content in my writing is well under 5%, and it only gets in when I'm tired of the topic and just want to be done.

19.02.2026 04:37

In other words, it's a happy accident that hippies, nerds, and computers were invented around the same time.
19.02.2026 03:39

For sure RLHF is a sort of interpellation. I guess I'm just surprised to hear the term "subject" being applied to something I am quite sure has no inner life as humans know it.
(I would maybe call it an ideological agent? That sounds sufficiently Marxist and conspiratorial to be amusing.)
And in your interactions with an LLM, do you find that it reports states of mind and emotion that are clearly deeper than "improv" to fulfill your apparent desires and expectations?
If not, isn't such improv simply downstream of the LLM's general mandate to be helpful to the user?
And this is different... how?
If you interrogate the motivations of an LLM and it says stuff like "I'm trying to help you with X", is that meaningfully different from an ordinary program where you select from options of what it can help you with, and then the program adapts to your response? (A toy sketch of such a program follows this thread.)
1. You'll see signs for the self, reality, etc. all over computer programs that you might not be inclined to call subjects.
2. Isn't speech an action?
I'm not the sort of person who feels qualified for a discussion featuring the phrase "subjects created by a semiotic field," but if you'll indulge an amateur - why subjects and not agents? Does it make a difference?
17.02.2026 15:47
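
A toy sketch of the "ordinary program" invoked in the thread above, using only the standard library; the task menu and the canned self-report are illustrative assumptions, not any particular product's behavior.

    # A deliberately ordinary program: it offers options, "adapts" to the
    # user's response, and reports its "motivation" -- a canned string,
    # not an inner life.

    TASKS = {
        "1": "summarize a document",
        "2": "draft an email",
        "3": "explain an error message",
    }

    def main() -> None:
        for key, task in TASKS.items():
            print(f"{key}. {task}")
        choice = input("Pick an option: ").strip()
        task = TASKS.get(choice, "pick a valid option")
        # The self-report of motivation, analogous to an LLM saying
        # "I'm trying to help you with X".
        print(f"I'm trying to help you with: {task}")

    if __name__ == "__main__":
        main()
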
That 1948 blue diagonal is absolutely wild. I am tempted to just deny that it's even real.

17.02.2026 14:53

I measure quality by whether they can reproduce my opinions and the reasoning behind those opinions.
17.02.2026 03:14

Karpathy is the most grounded of the group. The others are erratic but at least sometimes interesting.
17.02.2026 03:09

I honestly don't know what you're talking about. Can you give an example or something?
17.02.2026 01:34

I'm aware you can get pushback with instructions. I ask for it all the time. To the extent that RLHF enables that capability, it's useful. What I was saying is that RLHF doesn't *limit* the capabilities of the model. You can't blame a lack of something on RLHF.
17.02.2026 01:27

I agree what's actually happening is more interesting, but I'm not sure who's talking about that in a sane way either. I mean I love listening to the actual G's like Sutskever and Karpathy. Helen Toner and Kelsey Piper are good. I have yet to hear anyone else say anything interesting about AI.
17.02.2026 01:24

That's a great question. I have not thought of trying to make tasks that benchmark theory of mind. Someone should definitely do that!
I have anecdotal evidence of tasks it can't crack *today*, but they're complex live-fire things that could be interpreted in different ways.
The Yudkowsky crowd is going to keep scaring people with highbrow machine animist ghost stories for the next decade and I'm condemned to sit here and watch tons of otherwise smart people be neurotically terrified of something that doesn't exist and is nowhere near existing.
17.02.2026 00:02

RLHF does almost nothing. You can get any personality you want out of a frontier LLM so long as it doesn't violate safety filters. But you can't get a human-level rigorous critic. Because critical thinking isn't just a vibe - it requires cognitive processes that we don't yet know how to train for.
16.02.2026 23:59

My local King Soopers and I are locked in a perpetual struggle in which they drop and raise prices to find my price point, while I simply refuse to buy at the higher price out of spite.
15.02.2026 23:41

TPOT has always been dominated by personalities with an animist mystic bent that never quite seemed ironic, e.g. "psychic megafauna." It's a natural hotbed for this Blake Lemoine-esque fallacy to become reinforced to the point of tribal doctrine.
15.02.2026 18:57

I'm going to write a blog post called Against Highbrow LLM Superstition. The thesis will be that all the ways you guys think LLMs are like humans, e.g. in theory of mind, are illusions explained by hyperactive agency detection (i.e. animist superstition).
15.02.2026 18:54

Human psychology is the ultimate domain of heuristic, informal, probabilistic reasoning. Machines will not match human performance without RLVR, and the notion that RLVR is available in this context is debatable at best.
15.02.2026 17:05

The only path current frontier LLMs have towards AGI is to "arithmetize the universe," mapping all other domains into formal reasoning. There is no evidence that this is possible. It is a mathematician's fantasy. And I say this sympathetically, as a PhD mathematician who writes code for a living.
15.02.2026 17:01

Accordingly, precision outside of RLVR-supported domains remains limited. For example, in my experience, even frontier LLMs set to max thinking token expenditure can still be tricked by adversarial perturbations of simple riddles, like the mother surgeon riddle with the father substituted.
15.02.2026 16:58

I completely agree that LLMs have improved in precision. When solving math problems, their outcome precision can match that of elite high schoolers, as the LLM IMO medals prove. However, this is achieved through inefficient brute force training (RLVR) that can't be done outside math/code.
15.02.2026 16:54

Ah, I love this so much! Thanks for the reminder; I was describing it to a neuropsych specialist I know but had forgotten the name.
I'm not sure I understand why you're bringing it up though. What is the point here?
TPOT / Gray Tribe / Yudkowskites are long overdue for a correction on this score. And frankly I think I'm the first, and still only, person to nail the precise nature of the correction needed.
15.02.2026 16:41

Nah, it's expected from the architecture. Transformers are semantic search engines, and just like normal search engines, they're much better at giving you potentially relevant stuff than they are at filtering out the crap that *looks* relevant.
15.02.2026 16:40

This is a special case of a general pattern I have written about elsewhere: LLMs have superhuman "recall" but subhuman "precision." Recall is associative inference (adding good edges to a knowledge graph) while precision is criticism (removing bad edges).
15.02.2026 16:38
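
A toy restatement of the recall/precision framing above: proposed knowledge-graph edges are scored against a gold edge set. The edge sets and helper name are illustrative assumptions, not anything from the thread.

    # Score a set of proposed knowledge-graph edges against a gold standard.
    # Recall rewards adding the good edges; precision rewards filtering out
    # the plausible-looking bad ones.

    def precision_recall(proposed: set[tuple[str, str]],
                         gold: set[tuple[str, str]]) -> tuple[float, float]:
        true_edges = proposed & gold
        precision = len(true_edges) / len(proposed) if proposed else 0.0
        recall = len(true_edges) / len(gold) if gold else 0.0
        return precision, recall

    gold = {("aspirin", "thins blood"), ("aspirin", "treats headaches")}
    # Superhuman recall, subhuman precision: every good edge is present,
    # but a plausible-looking bad edge never gets removed.
    proposed = {("aspirin", "thins blood"), ("aspirin", "treats headaches"),
                ("aspirin", "cures colds")}
    p, r = precision_recall(proposed, gold)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=1.00
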
I think we split on whether LLMs have "excellent theory of mind." I think they can pattern-match information provided to existing human-developed psychological theories, but they largely lack the deeper theory of mind needed to challenge the information provided, as a human psychologist would.

15.02.2026 16:32