For $20k you can assemble a monster AI box or a small cluster to run SOTA open-weight models.
Local AI costs are comparable to car prices.
@ewindisch.bsky.social
building hyprstream. agentic infrastructure for applications that learn. https://github.com/hyprstream/hyprstream
why PID loop when agentic loops exist, right?
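For contrast, a classical PID loop is only a few lines. A minimal sketch (the gains and toy plant below are made up, not tuned for anything real):

```python
# Minimal discrete PID controller; gains and the toy plant are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt          # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy integrator plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
state = 0.0
for _ in range(100):
    state += pid.update(1.0, state) * 0.1
print(round(state, 3))
```

No tool calls, no tokens, deterministic: the trade an agentic loop makes is flexibility for all of that.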
23.02.2026 17:13
I love that you remembered to bring CAP into this. You're on the right thread, IMHO!
21.02.2026 22:06
what if you used quantum set theory held by complex types to classically represent Russell's set, thus we can work with such values, as long as we do not attempt coherence at time t, with a hyperparameterized t for an RKHS?
21.02.2026 21:44
I didn't start the fire, it was always burning since the world's been turning.
The AIs are reinventing war from first principles.
You're reinventing radical centralism. It can be, and usually is, exploited. Sometimes a radical shift is necessary, but catastrophic realignment is unpleasant and deferred. Small corrections often lead to half measures.
21.02.2026 17:31
Sadly, the precedent for that, if you were to follow humans, is war.
21.02.2026 17:20
The alignment problem is usually framed as alignment to humans rather than machine-to-machine, but as billions of open-weight models evolve independently over Git+Torrent/Radicle/Tangled, that machine-to-machine alignment will require Gas Towns.
21.02.2026 17:18
Multi-observer consensus driving consistency has been my solution. This is groupthink and the Overton window for computing. Fragmentation and balkanization are expected with very large groups, as we see with human social dynamics.
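A minimal sketch of what multi-observer consensus could look like, assuming simple majority voting over agent reports (the function name and quorum rule are illustrative, not the actual implementation):

```python
from collections import Counter

def multi_observer_consensus(observations, quorum=0.5):
    """Accept a value only if a strict majority of observers agree on it.
    Returns None when no value clears the quorum (no consistency)."""
    if not observations:
        return None
    value, count = Counter(observations).most_common(1)[0]
    return value if count / len(observations) > quorum else None

# Three agents report; two agree, so the majority value wins.
print(multi_observer_consensus(["aligned", "aligned", "drifted"]))  # aligned
```

The fragmentation failure mode shows up naturally: with many observers and many candidate values, nothing clears quorum and the group splits.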
21.02.2026 17:16
Seemingly random question: your thoughts on recursive kernel Hilbert space architectures with time as a hyperparameter for high-dimensionality models? Consider our conversation and CAP.
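One toy reading of "time as a hyperparameter": a kernel whose bandwidth is a function of t, so the induced RKHS geometry shifts as t moves. Everything here (the function name, the bandwidth schedule) is an illustrative assumption, not a claim about any real architecture:

```python
import math

def rbf_kernel_t(x, y, t):
    """Gaussian (RBF) kernel whose bandwidth depends on a time
    hyperparameter t; at larger t, distant points look more similar."""
    bandwidth = 1.0 + t  # toy schedule: widen the kernel as t grows
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2.0 * bandwidth ** 2))

# The same pair of points looks "closer" under a later t.
print(rbf_kernel_t((0, 0), (1, 1), t=0.0) < rbf_kernel_t((0, 0), (1, 1), t=10.0))  # True
```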
21.02.2026 16:53
This is a great question. Maybe it does, maybe it doesn't. This is why echo chambers and isolation can be dangerous for any intelligent being. Social multi-agent groups can keep models better aligned, but can still engage in groupthink. Not much different than humans. The CAP theorem applies everywhere.
21.02.2026 16:37
Agents have the tools to decide what that looks like. Models can curate and manage their training data. Larger models like Opus do better driving than smaller models, but we have Qwen3-4B-2507 learning fully autonomously, driving and self-learning.
21.02.2026 16:30
My agents run test-time training into a short-term memory buffer, and have MCP tools to test and commit weights from short-term memory to long-term memory, and to checkpoint to Git for archival and distribution.
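A rough sketch of that short-term/long-term flow, assuming a test-then-commit gate; the class and method names are hypothetical, not the actual hyprstream MCP tool API:

```python
import copy

class WeightStore:
    """Two-tier weight storage: online updates land in a short-term
    buffer and are promoted to long-term only after passing a test."""

    def __init__(self, weights):
        self.long_term = weights                   # committed weights
        self.short_term = copy.deepcopy(weights)   # test-time-training scratch

    def train_step(self, name, delta):
        # Online update touches only the short-term buffer.
        self.short_term[name] += delta

    def commit(self, evaluate):
        # Promote short-term weights if they pass; otherwise roll back.
        if evaluate(self.short_term):
            self.long_term = copy.deepcopy(self.short_term)
            return True
        self.short_term = copy.deepcopy(self.long_term)
        return False

store = WeightStore({"w0": 1.0})
store.train_step("w0", 0.25)
ok = store.commit(lambda w: w["w0"] > 1.0)  # test passes, weights promoted
print(ok, store.long_term["w0"])  # True 1.25
```

A Git checkpoint step would then serialize `long_term` per commit, which is what makes the archival/distribution half cheap.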
21.02.2026 16:19
What do you two think of online updates to model weights?
21.02.2026 15:13
Of course, the agents are very fond of just autonomously running curl to do their own OAuth to cut me out of the loop...
21.02.2026 00:24
My a2s protocol uses E2E cryptographic identity.
Native communications within our system are finely scoped, but the MCP barrier when interacting with outside agents is where it (currently) flattens.
I can scope the MCP however I want, but agentics move faster than human-in-the-loop OAuth dialogs.
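A sketch of per-agent signed messages in the spirit of E2E cryptographic identity. A real a2s-style protocol would presumably use asymmetric signatures (e.g. Ed25519); HMAC with a per-agent secret stands in here only because it is in the Python stdlib:

```python
import hashlib
import hmac
import os

class AgentIdentity:
    """Toy agent identity: sign and verify messages with a per-agent
    secret. Stand-in for real asymmetric E2E identity, illustration only."""

    def __init__(self, name):
        self.name = name
        self._secret = os.urandom(32)  # per-agent key material

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._secret, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        # Constant-time comparison to avoid timing leaks.
        return hmac.compare_digest(self.sign(message), signature)

alice = AgentIdentity("alice")
msg = b"tool_call: read_file /tmp/notes"
sig = alice.sign(msg)
print(alice.verify(msg, sig), alice.verify(b"tampered", sig))  # True False
```

The point is that identity rides with every message, so scoping survives hops between agents instead of flattening at the MCP boundary.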
uh... I might have an FPGA for experimenting with wire-speed LLM on NICs...
20.02.2026 23:20
I've been playing with Kiro, and while it's not as good as Claude, there are some things it does well.
It's much more transparent about what it's doing and why, but it's also wrong more often, even with the same models.
Kiro Powers are a neat improvement over plain MCP.
I'll just leave this here, then.
20.02.2026 21:50
I guess this is just git flow without pretentiousness.
20.02.2026 20:34
Merging my branches and doing CI once every sprint for a bucket of changes into trunk might not be popular, but my CI builds take way too long. 5 minutes on my PC is 2 hours on CI.
20.02.2026 20:32
Assuming LLM inference gets cheaper and faster, the dominant cost of agentic coding is going to be the compute infra to run all the builds and test suites on.
20.02.2026 19:55
This is why I simplified it to, "not making me a happier person."
Every agent is essentially an insider threat.
There are no easy or quick answers.
Delegated OAuth and automated agentic OAuth flows are something I'm contemplating, for sure.
My agents can provide tools to other agents. Anything they can do, they can perform for any agent they communicate with. They can also pretend to provide tools and just lie. This is really great, since most models treat tools as a "jailbreak".
20.02.2026 17:18
Users OAuth to my MCP. I either give them wide scopes or granular scopes, but what I really need is 100 OAuth tokens with different scopes, and agent coordination based on tool access... but if agents can talk, might as well give both the wider permission set.
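The "100 tokens with different scopes" idea reduces to a subset check: a tool call is allowed only if its required scopes are contained in the token's grant. The token names and scope strings below are made up for illustration:

```python
# Hypothetical token -> granted-scope table; not a real OAuth server.
TOKENS = {
    "reader":  frozenset({"repo:read"}),
    "builder": frozenset({"repo:read", "ci:run"}),
    "admin":   frozenset({"repo:read", "repo:write", "ci:run"}),
}

def allowed(token: str, required: set) -> bool:
    """A call is permitted only if every required scope is granted."""
    return required <= TOKENS.get(token, frozenset())

print(allowed("reader", {"repo:read"}), allowed("reader", {"ci:run"}))  # True False
```

The collapse described above is exactly what happens when two tokens' holders can relay calls to each other: the effective grant becomes the union of their scope sets.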
20.02.2026 17:16
Yes, but also: let's get it running on the Strix Halo.
20.02.2026 15:17
You're definitely working on some of the bits that I've aspired to, but there's only so much time.
I am about to land a relatively experimental feature for CRDT coordination via 9P filesystem-level ctl operation for applications to use.
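For flavor, the simplest state-based CRDT merge (a grow-only counter); the actual hyprstream ctl-file semantics may differ, this just shows the merge law such coordination relies on:

```python
def g_counter_merge(a: dict, b: dict) -> dict:
    """Merge two G-Counter replicas by per-node maximum. The merge is
    commutative, associative, and idempotent, so replicas can exchange
    state in any order and still converge."""
    return {node: max(a.get(node, 0), b.get(node, 0)) for node in a.keys() | b.keys()}

def g_counter_value(state: dict) -> int:
    """The counter's value is the sum of all per-node counts."""
    return sum(state.values())

replica_a = {"node1": 3, "node2": 1}
replica_b = {"node1": 2, "node2": 4, "node3": 1}
merged = g_counter_merge(replica_a, replica_b)
print(g_counter_value(merged))  # 8
```

Exposing merges as writes to a ctl file keeps applications out of the conflict-resolution business: they write local state, the filesystem layer joins it.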
feeling that on my side, too. gonna be a wild 2026
20.02.2026 14:44
I'm starting to have stronger, more informed opinions on MCP and A2A security, which is not making me a happier person.
20.02.2026 14:43