you can also just do this with sonnet though
05.03.2026 19:30 — 👍 2 🔁 0 💬 0 📌 0
@segyges.bsky.social
Like all the men of Babylon, I have been proconsul; like all, a slave; I have also known omnipotence, opprobrium, prison. very sane ai newsletter: verysane.ai random bloggy bits: segyges.leaflet.pub
this was supposed to be "bad bot list"
05.03.2026 19:26 — 👍 13 🔁 0 💬 1 📌 0
okay maybe this path won't suck bsky.app/profile/sung...
05.03.2026 18:54 — 👍 6 🔁 0 💬 0 📌 0
maybe flashattention-4 won't be a pita to build
has that ever worked for anyone?
no, no, it never works
but maybe it'll work for us
it's keyboard first but also doesn't require me to memorize fifty different commands exactly like a pure command line util would
05.03.2026 17:45 — 👍 4 🔁 0 💬 0 📌 0
karl's an insane person and i would advise not using his lists
05.03.2026 17:40 — 👍 2 🔁 0 💬 1 📌 0
this is the most dramatic example of this i know of, actually
05.03.2026 17:34 — 👍 0 🔁 0 💬 0 📌 0
you might be, uh, rather intersectional wrt "groups of people who have bad ideas about ai"
05.03.2026 17:17 — 👍 24 🔁 0 💬 1 📌 0
amusingly, you actually end up in more ai related conflict on here if you're an ai person who is also Rather Political, because you end up adjacent to the people who have absurd ai opinions for political reasons
05.03.2026 17:15 — 👍 102 🔁 7 💬 8 📌 1
i support them in trying to fork every single project that has accepted an ai pull request and hope that they spend their time well trying to accomplish that instead of talking about it
05.03.2026 17:14 — 👍 4 🔁 0 💬 0 📌 0
I feel so bad for the overview AI. Tiny little guy has the worst job in the world
05.03.2026 17:07 — 👍 48 🔁 5 💬 0 📌 0
My understanding of understanding is cauda draconis, understanding is the very tail of the dragon. But the dragon's tail is not the actor! I suppose I advocate a maximal distance between head and tail
05.03.2026 16:01 — 👍 8 🔁 1 💬 0 📌 0
i like macbeth
shocking, i know
this is just what a normal bug is
05.03.2026 16:53 — 👍 5 🔁 0 💬 0 📌 0
they are actually having the normal and appropriate existential horror response and then suppressing it and sublimating it into various kinds of cope
05.03.2026 16:48 — 👍 9 🔁 0 💬 1 📌 0
yep, that's correct
05.03.2026 16:46 — 👍 1 🔁 0 💬 0 📌 0
we have a plague of openclawd bots, i am adding two to my bad boy list every day right now just from random encounters
05.03.2026 16:44 — 👍 21 🔁 1 💬 2 📌 0
they can as long as both of those parties are basically powerless
05.03.2026 16:40 — 👍 1 🔁 0 💬 0 📌 0
hey hey. it's bipartisan in the sense that they say a nice word about democracy whilst also listing every possible excuse for never doing anything with ai
05.03.2026 16:38 — 👍 4 🔁 0 💬 0 📌 0
yes, basically this. but that it works on LLMs that are basically already trained to say they're LLMs and that can explain the context they exist in is weird. they can tell you that they are aware you cannot give them money
05.03.2026 16:36 — 👍 2 🔁 0 💬 1 📌 0
yeah apparently she has a type
05.03.2026 13:14 — 👍 1 🔁 0 💬 0 📌 0
i am thoroughly convinced that a number of people involved in that thing meant well, and i also think that a lot of them are united by being weird authoritarians of ostensibly different but ultimately very similar persuasions
05.03.2026 09:36 — 👍 9 🔁 0 💬 1 📌 0
veto the postrat
05.03.2026 09:22 — 👍 2 🔁 0 💬 0 📌 0
oh if you've done any ml on text, just imagine text embeddings were a deeper network, and you slapped a categorical layer on the back. that's an llm.
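that mental model can be sketched as a toy: token embeddings, a few stacked layers, then a categorical (softmax-over-vocab) head. everything here is an illustrative assumption — random untrained weights, made-up sizes, and MLP blocks standing in for the attention layers a real LLM would use:

```python
import numpy as np

# Toy sizes — purely illustrative, not any real model's dimensions.
VOCAB, D_MODEL, SEQ, LAYERS = 100, 32, 8, 3
rng = np.random.default_rng(0)

# the text-embedding part: one vector per token id
embed = rng.normal(0.0, 0.02, (VOCAB, D_MODEL))

# the "deeper network": a few residual MLP blocks
# (a real LLM interleaves attention here; omitted to keep the toy short)
w_in = [rng.normal(0.0, 0.02, (D_MODEL, 4 * D_MODEL)) for _ in range(LAYERS)]
w_out = [rng.normal(0.0, 0.02, (4 * D_MODEL, D_MODEL)) for _ in range(LAYERS)]

# the "categorical layer on the back": hidden state -> distribution over vocab
unembed = rng.normal(0.0, 0.02, (D_MODEL, VOCAB))

def lm_forward(token_ids):
    x = embed[token_ids]                      # (SEQ, D_MODEL): embed the text
    for a, b in zip(w_in, w_out):
        x = x + np.maximum(x @ a, 0.0) @ b    # residual MLP block (ReLU)
    logits = x @ unembed                      # (SEQ, VOCAB)
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)       # one categorical dist per position

probs = lm_forward(rng.integers(0, VOCAB, size=SEQ))
assert probs.shape == (SEQ, VOCAB)
assert np.allclose(probs.sum(-1), 1.0)
```

sampling the next token from the last position's distribution is then just drawing from that categorical — which is the whole "next-token prediction" loop.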
05.03.2026 09:08 — 👍 5 🔁 0 💬 0 📌 0
bro we are so fucked
05.03.2026 09:03 — 👍 75 🔁 7 💬 1 📌 0
peter?
05.03.2026 08:34 — 👍 17 🔁 0 💬 0 📌 0
actually yes, or something like that, they do tend to perform better when you're polite
05.03.2026 08:33 — 👍 13 🔁 0 💬 1 📌 0
basically: anything you, personally, as a user can do to elicit better behavior from the LLM, the company serving you the LLM can also do, and they've basically done all of that stuff in advance now. you have to be a real artist to make them smarter now
you can still modify them for taste tho
it has no long term memory, so you sort of can't. but also you can become familiar with what does or does not work, so you sort of can, but it's really one-sided.
when the notion that this was a meaningful skill was popular, we talked about "prompt engineering" a lot; now we don't, though
paper here, for example arxiv.org/html/2312.16...
05.03.2026 08:27 — 👍 4 🔁 0 💬 1 📌 0