
@cutterferry.bsky.social

Interested in mathematics, computer science, history, political thought, and the (lack of) "ethics" of AI. Anonymous because questioning AI is career-limiting behaviour.

63 Followers  |  457 Following  |  360 Posts  |  Joined: 19.08.2024

Latest posts by cutterferry.bsky.social on Bluesky

10.02.2026 16:02 — 👍 215    🔁 69    💬 3    📌 1

Similarly, can’t get your AI to do something useful before context degradation kicks in?

Use a swarm of agents that externalise context and more agents to manage them!

Want to improve the rigour of a problem solving response? Set up adversarial critics!

Etc etc.

I have some theories…

10.02.2026 21:11 — 👍 1    🔁 0    💬 1    📌 0

In passing though it’s interesting how the solution to a lot of problems in AI is more AI. (Neutral observation!)

eg one of your better arguments is that we can safely sail the roiling sea of agent generated code by… asking other agents to explain it to us. (Notwithstanding my mixed experiences.)

10.02.2026 21:09 — 👍 2    🔁 0    💬 1    📌 0

In this case, adding to the uncertainty under our feet with a kind of stochastic compiler (…and moving to pretty informal language…) does feel like only making things worse to me.

10.02.2026 21:07 — 👍 0    🔁 0    💬 1    📌 0

Yes I agree. Good code design is inter alia a conversation with the future, or rather with possible futures, and that on its own is a source of uncertainty.

I think we probably agree about a lot, I’m just profoundly more pessimistic about AI, and not because I don’t think it β€œworks”…

10.02.2026 21:05 — 👍 1    🔁 0    💬 1    📌 0

There’s an odd conflation between “bad-as-in-harmful” and “bad-as-in-ineffective” that goes on. (And symmetrically “good-as-in-useful” and “good-as-in-morally-acceptable”.)

Internalised neoliberal capitalism or something. Still surprises me and leaves me feeling quite lonely.

10.02.2026 20:56 — 👍 2    🔁 0    💬 0    📌 0

But who oh who will engineer the privacy of the privacy engineers??

10.02.2026 18:57 — 👍 0    🔁 0    💬 0    📌 0

Well you have "privacy engineer" in your profile...!

10.02.2026 16:37 — 👍 1    🔁 0    💬 1    📌 0

Also: no-one can keep up any more? Maybe they just need to hire more privacy engineers ;-)

10.02.2026 11:00 — 👍 0    🔁 0    💬 1    📌 0

Hot take: the rise of AI bots and manipulation is actually going to make the investment in identity verification look like a retrospectively good investment to a lot of these platforms, or at least those who wish to try to remain human-based.

10.02.2026 10:59 — 👍 0    🔁 0    💬 0    📌 0

True on both counts in my experience. Unfortunately the same factors that made it bad code made that kind of extraction into an empirical harness impractical too.

10.02.2026 02:18 — 👍 0    🔁 0    💬 0    📌 0

So the AI was demonstrably wrong, and when corrected it produced the usual obsequious failure to really get it, so I gave up.

10.02.2026 02:07 — 👍 0    🔁 0    💬 1    📌 0

But alas in this timeline the code is batshit and the question isn’t really well-posed since every invariant is a conversation with a hidden space of possibility and god only knows what eldritch nightmares its authors thought they were fighting because they certainly didn’t comment.

10.02.2026 02:06 — 👍 0    🔁 0    💬 1    📌 0

so tbf it was pretty terrible code, and long, and I could tell I was going to need a pen and paper I lacked …

... so my prompt was probably equivalent to “wtf does this function do” plus a bunch of ctx.

AI came back with a perfectly reasonable explanation of what it *ought* to do given name and input.

10.02.2026 02:04 — 👍 0    🔁 0    💬 1    📌 0

But the ability to think in general, and the ability to do computer science more particularly, were comparatively democratically and broadly distributed. We’re doing away with that.

10.02.2026 01:59 — 👍 0    🔁 0    💬 0    📌 0

To coin a phrase, capital controls the means of production, i.e. inference, silicon, cooling and weight vectors — and it will exploit that ruthlessly.

Was the snobbery and arrogance of the pre-LLM IT-talented a good thing? No, of course not; a bunch of pricks we were, in the aggregate.

10.02.2026 01:58 — 👍 0    🔁 0    💬 1    📌 0

(*Why* does the benefit accrue to the owners of the machine? Because you’re not competitive without it, and the job is deskilled so vastly more people could do it — which is *precisely the point*, as indeed the AI execs keep breathlessly reminding us.)

10.02.2026 01:56 — 👍 0    🔁 0    💬 1    📌 0

Retraining to try to crack that wider market is not going to be easy for senior people for a bunch of reasons, incl ageism again.

And as AI will be unemploying Ks of workers with the benefit accruing to the owners of the machine, *all* workers are going to be having a hard time getting good pay.

10.02.2026 01:49 — 👍 0    🔁 0    💬 1    📌 0

Which leads to the other great bit of copium going around: that AI will create new jobs. That’s simply not necessarily meaningfully true. There will be _some_ new jobs in AI, but not necessarily that many. The rest will have to be absorbed into the wider jobs market.

10.02.2026 01:47 — 👍 0    🔁 0    💬 1    📌 0

And tbh “oh well we’ll just do the higher-level/more PM-ey stuff now” is pure self-delusion. Firstly bc mgt and research clearly aim to automate that too. Secondly because as we drop the marginal cost of tech to nil, there’s simply no reason to suppose demand will expand enough for *that* many PMs.

10.02.2026 01:45 — 👍 0    🔁 0    💬 1    📌 0

(To return to a running example, it’s even harder to persuade mgt now that “a higher-level replacement for CRUD would be good”. It’s the AI’s problem. Even if it might actually help the AI, whole swathes of skill are now being sidelined.)

10.02.2026 01:42 — 👍 0    🔁 0    💬 1    📌 0

So I’m glad some people who apparently always hated the discipline of coding are having fun.

How much fun do you think my engineer is having?

And the stuff about “oh well we just work at a higher level now” doesn’t save them: there are swathes of skills just being junked now.

10.02.2026 01:41 — 👍 0    🔁 0    💬 1    📌 0

… and are now being told their skillset is useless.

The native ageism of the industry imho prevents it from thinking seriously about the older engineer who used to thrive on great DSL design, is now being told her skillset is useless, and is wondering how she will feed her kids in future.

10.02.2026 01:38 — 👍 0    🔁 0    💬 1    📌 0

… a bunch of folks also liked things like domain theory, and now we’re basically tossing all those avenues out because it turns out we can automate code monkeys, so we’ll just throw scale at the problem.

And a bunch of folks spent a long time earning peanuts to get a very particular set of skills…

10.02.2026 01:36 — 👍 0    🔁 0    💬 1    📌 0

I’m well aware that some ppl are enjoying this. And believe me, I can see the appeal.

But doesn’t it occur to you that there are people who are not?

A bunch of folks are “glad that code is over”. Well, maybe they never liked it, but lots of folks thought it was interesting as formal notation…

10.02.2026 01:34 — 👍 0    🔁 0    💬 1    📌 0

Imho that’s not quite right. That may be where the conversation about new software starts but it’s not where it ends.

10.02.2026 01:30 — 👍 0    🔁 0    💬 0    📌 0

Honestly the frontier LLM I use is totally hopeless at code comprehension (even when good at generation).

Perhaps it understands its *own* code but…

10.02.2026 01:29 — 👍 0    🔁 0    💬 1    📌 0

The “slave mind” model of AI simply doesn’t fit this. Nor, indeed, does the “egoless and theory-of-mind-less” raving robot of the LLM really seem plausible to me as something on which to build a kind of social and epistemic agency.

10.02.2026 01:28 — 👍 0    🔁 0    💬 0    📌 0

AIs as currently implemented are simply denied the *social* status and roles that would allow this kind of epistemic agency. No reason in principle we couldn’t fix this; plenty of reasons it’s hard in practice — and not just technical ones.

10.02.2026 01:26 — 👍 0    🔁 0    💬 1    📌 0

An engineer, rather, has epistemic agency: they engage with the problem, discover the driving concerns underlying the requirements, and generate, store, evangelise and document new, more precise and formal knowledge. Notwithstanding mgt fantasies about buses, *they* were part of the outcome too.

10.02.2026 01:24 — 👍 1    🔁 0    💬 2    📌 0
