Hope this signals a further death of the "comply in advance" era
09.03.2026 21:14 — 👍 0 🔁 0 💬 0 📌 0
@kaldrenon.bsky.social
Writer, encourager, big soft nerd. Kaldrenon everywhere on the internet, Seven Eagles in FFXIV. Concerningly goobery. Most FFXIV content will be on @lyses-good-boy, still working on setting things up.
One time, on a particularly frustrated job hunt day, I answered a "why do you want this job" style question with: Food and shelter are being held hostage, and I need ransom money.
I didn't get the job, but I DID get a comment on that in my rejection email. 😂
Where do I apply to become the new CEO?
09.03.2026 19:29 — 👍 1 🔁 0 💬 0 📌 0
sadly the article says she's still at the company; she's just moving to a tech-focused role because she doesn't know how to people (I may be paraphrasing a *bit*)
09.03.2026 19:27 — 👍 1 🔁 0 💬 0 📌 0
A little disappointed that she wasn't let go for fundamentally misunderstanding what makes a social media platform good. Not surprised, mind you. Just disappointed.
09.03.2026 19:17 — 👍 0 🔁 0 💬 0 📌 0
WAFFLES!
09.03.2026 19:13 — 👍 1 🔁 0 💬 1 📌 0
I'm certainly not claiming this happened, but I would *believe it* if someone said Trump ordered a person to take off a shoe and show him the size tag to prove it was the shoe he ordered for them. It would be in character for him.
09.03.2026 19:12 — 👍 4 🔁 0 💬 0 📌 0
The best case in a lot of situations seems to be "have AI do it then have a person fix the AI's mistakes."
Which at best is less rewarding than doing it manually, and often doesn't even save time or money!
Indeed, this has definitely already happened! I've read stories from multiple disciplines, including law firms, tech companies, and anime translation teams, where a requirement to use LLMs in their workflow has produced flawed or downright false results to varying degrees.
09.03.2026 19:09 — 👍 1 🔁 0 💬 1 📌 0
me today
(I get more prolific on social media on days when I need to be in the office but I don't have the spoons to do actual work, can you tell?)
Studied the teachings of Jesus.
Studied the teachings of The Church.
When I realized they were mutually exclusive, I picked Jesus. I'm still on his team today and it's why I don't go to church any more.
I want to be very open-minded about the potential good things LLMs could be used for after the current bubble bursts.
But I'm not willing to take any argument seriously if the person trying to persuade me is using ChatGPT.
That creates a challenge for enthusiasts. Have you done the legwork to find tools that are ethical to use? Ones which have not flagrantly violated copyright, which do not constitute an immediate threat to the environment, which are clear-eyed about the limitations of the technology?
09.03.2026 18:20 — 👍 0 🔁 0 💬 1 📌 0
I think it's quite reasonable for a person, after seeing all the big name products and what they've done and continue to do, to suspect that there's no ethical option for LLM-driven tools on the market right now. I wouldn't fault them if they didn't dig deeper.
09.03.2026 18:20 — 👍 1 🔁 0 💬 1 📌 0
I also believe that for many critics the objection stems from empirical data about the largest and most well-known tools.
I have no intrinsic objection to LLMs as technology. I DO have an intrinsic objection to using ChatGPT, Gemini, Claude, Copilot, et al.
The ethics and impact of tools matters.
In a functioning government that tweet would legally constitute a formal resignation from elected office. He is officially declaring his unfitness to represent the people of his district in Congress.
09.03.2026 18:01 — 👍 0 🔁 0 💬 0 📌 0
As an analogy, consider: "aspirin doesn't do anything" vs "aspirin can cure any disease"
Both false in a binary sense. But if you're trying to express the scope of impact aspirin is capable of having, "doesn't do anything" is off the mark by far, far less.
It's SUCH a dramatic lie that, while "LLMs can't do anything" is just as false in a binary sense as "LLMs can do everything," claiming that they can't do anything is orders of magnitude *closer* to the truth.
09.03.2026 17:55 — 👍 5 🔁 0 💬 1 📌 0
LLMs are not very good at MOST things. And for many of the things they're good at, we already have very high quality solutions which are more consistent, effective, and sustainable.
But the people who want your money tell you it can do anything you ask it to. It's a much bigger lie than usual.
Like any tool, there is some set of things an LLM can do, and a smaller set of things for which it is the best/most effective tool.
Like any tool, the people who profit from it want you to believe that set is much larger than it actually is.
Where LLMs feel distinct is the sheer size of the gap.
They just want to have fun making things. That so strongly outranks the importance of making *useful* things, or making them *well*, that when a new tool comes along which is objectively less effective they will still embrace it gleefully as long as it is shiny and new and fun to play with.
09.03.2026 16:57 — 👍 4 🔁 1 💬 0 📌 0
As a developer, I have a theory, largely untested:
A common subtype of tech person is one who is far less interested in what it is they're making, and far more interested in playing with interesting toys in the process of making it.
To them, LLMs appeal because it's new and different. That's it.
if we rebuild an economy based on wage growth and not asset appreciation nobody will mind a slightly bigger tax bill
09.03.2026 14:36 — 👍 59 🔁 11 💬 2 📌 0
Absolutely. Once we tax billionaires out of existence, the balance of where taxes come from will have to shift accordingly.
But the ideal is that wage balance also shifts, such that a middle class being taxed at higher rates is not nearly as much of a burden as it would feel today.
Ironically, Claude was used in assaults on Iran and one could use that to *justify* calling Anthropic a threat to national security. It did in fact materially contribute to a reduction in our security as a nation.
But this regime would have to admit to a lot of sins to make that case in court.
I should start doing "time to AI" speedruns, where I start a timer every time I'm looking into a new product or service, and stop the timer as soon as they mention AI "features."
It happens so often. Why must it be so hard to find tools worth using?
First he said Iran was a week away from having a nuclear weapon. They weren't. This one was even too much for turbo dink Ted Cruz. Next it was "we've been at war with Iran for 47 years." We weren't. Then it was "we're liberating the people from a tyrannical government." We're not. Then it was "we're going to take out their leader and replace him with a pro-US guy." We didn't.
writing about the fucking war for my newsletter. subscribe for free so you don't miss it.
09.03.2026 14:48 — 👍 171 🔁 35 💬 4 📌 5
Lots of love from me and Steve!
09.03.2026 14:02 — 👍 572 🔁 94 💬 18 📌 3
Yeah, I think that's the biggest challenge - it's not a one-size-fits-all problem but it's often treated as such!
Also, apologies, I meant to add this but forgot: I doubted I was saying things you hadn't already thought about, just reflecting on the parts of the question I find most noteworthy.
My boss was away for the past two weeks and all the meetings I was in started on time.
I'm going to miss that.