huggingface.co/papers/2602....
02.03.2026 21:58 · 13 · 1 · 0 · 0
Good correction. Maybe a more representative metric would total up the number of enterprise plans Anthropic is selling, or the big deals they've made. Which we don't have, unfortunately.
02.03.2026 22:37 · 1 · 0 · 0 · 0
If AI automates lots of routine physical tasks, developing countries may lose a path out of poverty (beat me to something I wanted to write, darn).
www.theargumentmag.com/p/we-may-mis...
Didn't realize Anthropic was beating OpenAI so hard on enterprise sales.
Source: x.com/arakharazian...
My model of Scott is that he says things he thinks are true. He may be withholding unsavory opinions, but he's not usually opposing stuff that he secretly likes.
This should be the default assumption for most people btw.
Comments on this post take the position that Scott argues against stuff as a way to secretly promote it. I don't think that's true. That would be weird.
02.03.2026 21:55 · 0 · 0 · 1 · 0
Ok, it's computationally intensive, but
mino.mobi/cluster
Calculates the largest group of your follows who all follow each other. Your densest subgraph.
And then lets you publish it as a list (which I hope to refer to in further analysis)
Hereβs mine bsky.app/profile/did:...
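The "largest group of your follows who all follow each other" is a maximum-clique problem on the mutual-follow graph, which is why it's computationally intensive. A minimal sketch (my own illustration, not mino.mobi's actual code) using Bron–Kerbosch with pivoting:

```python
from itertools import combinations

def mutual_follow_graph(follows):
    """follows: dict mapping user -> set of users they follow.
    Returns adjacency sets of the undirected mutual-follow graph."""
    adj = {u: set() for u in follows}
    for a, b in combinations(follows, 2):
        if b in follows[a] and a in follows[b]:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def largest_clique(adj):
    """Bron-Kerbosch with pivoting: exponential in the worst case,
    but fine for follow graphs of a few hundred accounts."""
    best = set()
    def bk(r, p, x):
        nonlocal best
        if not p and not x:
            if len(r) > len(best):
                best = set(r)
            return
        pivot = max(p | x, key=lambda v: len(adj[v] & p))
        for v in list(p - adj[pivot]):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(adj), set())
    return best

# Toy follow data: ann, bob, cat all mutually follow each other;
# dan only follows ann back.
follows = {
    "ann": {"bob", "cat", "dan"},
    "bob": {"ann", "cat"},
    "cat": {"ann", "bob", "dan"},
    "dan": {"ann"},
}
clique = largest_clique(mutual_follow_graph(follows))
print(sorted(clique))
```

Publishing the result as a list is then just a matter of writing out the clique members' handles.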
TIL that Signal uses deniable authentication of messages, AKA Off-the-Record (OTR) messaging.
That means that while you can verify that a message came from someone, you can't credibly share that message with anyone else. Cool!!
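The core trick can be sketched with a symmetric MAC (a simplified model of deniable authentication, not Signal's actual protocol): because both parties hold the same key, either of them could have forged any tag, so a transcript proves nothing to a third party.

```python
import hmac
import hashlib
import os

# Hypothetical shared key, e.g. the result of a Diffie-Hellman
# handshake. Crucially, BOTH Alice and Bob know it.
shared_key = os.urandom(32)

def tag(key, message):
    # Authenticate a message with an HMAC over the shared key.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, t):
    # Constant-time comparison to check the tag.
    return hmac.compare_digest(tag(key, message), t)

msg = b"meet at noon"
t = tag(shared_key, msg)
print(verify(shared_key, msg, t))        # Bob verifies: True
print(verify(shared_key, b"forged", t))  # tampering detected: False
```

Bob could have computed `t` himself, so showing `(msg, t)` to anyone else doesn't prove Alice wrote it; only a signature scheme (which neither party can forge) would do that.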
This war is blatantly unconstitutional - undeniably a large enough military action to require congressional authorization. Its wisdom and morality are closer calls. But I'm skeptical regime change (which would be good) can be achieved by air attack alone: reason.com/volokh/2026/...
28.02.2026 18:37 · 25 · 12 · 1 · 2
I assume this wasn't intentional, but the latter half of this reads as casting aspersions on someone based on their sexual interests.
That goes against my value of sex-positivity and I'd like to see less of it in the future.
The human alignment problem has surpassed the AI alignment problem in importance.
27.02.2026 23:07 · 25 · 4 · 2 · 1
New paper on a long-shot I've been obsessed with for a year:
How much are AI reasoning gains confounded by expanding the training corpus 10000x? How much LLM performance is down to "shallow" generalisation (approximate pattern-matching to highly-related training data)?
t.co/CH2vP0Y7OF
Hmm croque madame might get near the dark breakfast section
27.02.2026 16:25 · 2 · 0 · 0 · 0
I feel the milk corner should include dairy more generally, so we can discuss things like croissants, mousse, buttered toast, etc.
27.02.2026 16:22 · 2 · 0 · 1 · 0
Well, math terminology being what it is, something like this was bound to happen eventually.
(If you're curious about why these balls are so puny, the full talk is up on YouTube)
Hell yeah. I continue to be LoRA-pilled.
bsky.app/profile/hars...
Funny. Also interesting because the deployment of laser defenses is very important to the future of war.
27.02.2026 14:53 · 7 · 0 · 0 · 0
www.gleech.org/ai2025
27.02.2026 10:53 · 10 · 1 · 0 · 1
New preprint with LΓ©o Pio-Lopez:
www.preprints.org/manuscript/2...
"Multi-Scale Longevity: Defeating Aging from Cells to Embodied Human Minds, and the Future of the Species"
A broader view of longevity research.
Instead of forcing models to hold everything in an active context window, we can use hypernetworks to instantly compile documents and tasks directly into the model's weights. A step towards giving language models durable memory and fast adaptation.
Blog: pub.sakana.ai/doc-to-lora/
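The idea above can be sketched as a toy: a hypernetwork maps a document embedding to low-rank LoRA factors that get added to a frozen layer, so the document lives in the weights rather than in context tokens. A minimal numpy illustration (shapes and names are my assumptions, not Sakana's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, e = 16, 2, 8  # layer width, LoRA rank, doc-embedding size

# Frozen base weight of some layer in the model.
W = rng.normal(size=(d, d))

# Toy "hypernetwork": a single linear map from a document embedding
# to the flattened LoRA factors A (d x r) and B (r x d).
H = rng.normal(scale=0.01, size=(e, d * r + r * d))

def compile_doc_to_lora(doc_embedding):
    """Instantly 'compile' a document into LoRA weight factors."""
    flat = doc_embedding @ H
    A = flat[: d * r].reshape(d, r)
    B = flat[d * r :].reshape(r, d)
    return A, B

def adapted_forward(x, A, B):
    # The update A @ B is low-rank, so adapting to a new document
    # costs only (d*r + r*d) generated parameters, not a full d x d.
    return x @ (W + A @ B)

doc = rng.normal(size=e)  # stand-in for an embedded document
A, B = compile_doc_to_lora(doc)
y = adapted_forward(rng.normal(size=d), A, B)
print(y.shape)
```

In the real system the hypernetwork is trained so that the generated LoRA reproduces what the model would do with the document in context; the sketch only shows the plumbing.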
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.
https://www.anthropic.com/news/statement-department-of-war
Oh, thought of one more thing: Cardio on a regular basis (~3x/week or as needed) can help with anxiety and get your baseline energy levels up too.
27.02.2026 03:01 · 2 · 0 · 0 · 0
Things to consider:
- Melatonin on occasion
- Sleep apnea
- Burnout and stress can come from feeling unsupported at work. Taking breaks and rest doesn't really address the core problem.
You got this!
A big trend in the last century is putting a much larger fraction of the population into research tasks.
This counteracts the diminishing returns from eating the low-hanging fruit. The result is smooth linear progress.
Same with AI: we'll all become automators, and progress will be steady-ish?
I don't use block lists to pass judgement. I'm sure the people I block have thoughtful and reasonable things to say in general.
That said, I think it's perfectly fine to cultivate one's garden. People should be free to block whoever they want.
I forgot to mention that all of this was inspired by OAI raising issues with SWE-Bench-Verified and it turns out their alternative SWE-Bench-Pro is worse:
www.lesswrong.com/posts/nAMhbz...
LLMs are general-purpose priors. You need to teach them. When you find a problem your AI gets slightly wrong, take note!
Structure your work to be automated, iterate, push up the 9's of reliability, move to a new problem. This is the first and final project of humanity.
(9/9)
This will bring RL-as-a-Service to the fore. Experts with the tacit knowledge to figure out how to apply AI to tasks will hold the key to further progress.
(8/9)
splittinginfinity.substack.com/p/rl-as-a-se...
But now we're running out of benchmarks. We have to work harder to find errors. And how do we make progress if we can't differentiate between good and better?
(7/9)
www.sam-rodriques.com/post/the-end...