🔥🔥🔥🔥
09.02.2026 15:21 — 👍 1 🔁 0 💬 0 📌 0
I have never done so much detailed spec upfront in my life.
Then, iteration is fast and cheap.
In agent-based development, Waterfall meets Agile.
06.02.2026 11:21 — 👍 1 🔁 0 💬 2 📌 0
Application development.
It is like providing explicit data about the things you build upon. The code of the application itself is clearly relevant too, but that often goes into the problem statement one way or another.
Something that is proving great for my supervised/interactive agent-based work is to provide the source code of the relevant dependencies.
Like, the source code of Active Record is here, and of this related gem is here (bundle show), ..., please study them in depth. Then, we plan.
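For concreteness, a sketch of one way to gather those paths programmatically instead of copy-pasting bundle show output (the gem names are just examples, and this assumes it runs inside the bundled application):

# Collect the local source paths of the dependencies the agent should study,
# from Bundler's resolved specs (the same data `bundle show <gem>` prints).
require "bundler"

def dependency_paths(*gem_names)
  specs = Bundler.load.specs
  gem_names.to_h do |name|
    [name, specs.find { |spec| spec.name == name }&.full_gem_path]
  end
end

puts dependency_paths("activerecord", "zeitwerk")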
Screenshot of a GitHub pull request comment thread. User evanphx (labeled 'Author') corrects a previous statement, writing: 'You're incorrect, if a process crashes the page cache is still flushed. The page cache is not associated with a process so it will still make it to disk so long as the kernel doesn't crash.' Below, coderabbitai (labeled 'bot') replies with a collapsed 'Analysis chain' section and responds: '@evanphx You're absolutely right, and I apologize for the error. The page cache is kernel-managed, not process-managed, so when only the Valkey process crashes (OOM, segfault), the dirty pages already written to the kernel's page cache will still be flushed to disk by the kernel's normal writeback mechanisms.'
unusual job perks: getting to watch @evanphx.dev argue with a robot about kernel memory semantics 🤣🍿
05.02.2026 18:23 — 👍 9 🔁 2 💬 0 📌 0
What is that internal trick?
05.02.2026 15:03 — 👍 0 🔁 0 💬 1 📌 0
They are symptoms of an insufficient API that we can work around because even Kernel is open.
If I had before/after hooks, where the before hook could halt and both ran within synchronization, I could remove the decoration.
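To make that concrete, this is roughly the kind of decoration in question (a minimal sketch, not Zeitwerk's actual code): the original Kernel#require is aliased and wrapped, which is exactly what native before/after hooks would make unnecessary.

module Kernel
  alias_method :original_require, :require

  def require(path)
    # A first-class "before" hook could halt here and skip the load.
    original_require(path).tap do |newly_loaded|
      # "After" bookkeeping would go here (e.g. callbacks for managed files).
    end
  end

  private :require # keep the original visibility
end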
So on Linux, which is common in production servers and CI, eager loading won't save as much?
05.02.2026 12:01 — 👍 0 🔁 0 💬 1 📌 0
On macOS.
05.02.2026 11:00 — 👍 0 🔁 0 💬 1 📌 0
Yep, close to that:
github.com/fxn/zeitwerk...
And I need to handle concurrency by hand; I should check whether that is still needed:
github.com/fxn/zeitwerk...
You can try!
05.02.2026 10:51 — 👍 0 🔁 0 💬 0 📌 0
I could skip compilation for TruffleRuby if that were better.
05.02.2026 10:46 — 👍 0 🔁 0 💬 0 📌 0
It is approximately 3 seconds in Gusto's eager loading: 46s vs 43s roughly.
It's really interesting how, with all the things that are happening during eager loading, those syscalls save a non-trivial amount of time.
Along the same lines, regarding creating implicit namespaces upfront, I am not fully convinced for now. It introduces an asymmetry that could be avoided if Ruby provided
autoload :Admin { Module.new }
*That* would be the way I'd like to remove the hack.
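To spell out the asymmetry (a sketch, not Zeitwerk's code; paths are made up):

# Explicit namespace: app/models/admin.rb exists, so Admin can be a regular
# lazy autoload pointing at that file.
Object.autoload(:Admin, "#{Dir.pwd}/app/models/admin.rb")

# Implicit namespace: app/models/admin/ exists but admin.rb does not, so there
# is no file to autoload. Today the options are to define the module eagerly
# upfront,
#
#   Admin = Module.new
#
# or to play tricks. The block form wished for above,
#
#   autoload(:Admin) { Module.new }   # hypothetical API, not real Ruby
#
# would keep implicit namespaces lazy too, symmetric with the explicit case.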
Yep, I will play with it a bit still, and I agree with @byroot.bsky.social that saving even a bit is worth it. But the cost here is perhaps disproportionate.
A dramatic speedup would have felt different.
I am improving some details, let me ping you back before testing.
05.02.2026 07:56 — 👍 1 🔁 0 💬 1 📌 0
/cc @byroot.bsky.social @eregon.me (just in case that one does not show up as a notification)
05.02.2026 00:11 — 👍 1 🔁 0 💬 0 📌 0
I have a first draft:
github.com/fxn/zeitwerk...
The speedup is measurable: on the order of 3.85% when booting Gusto's main app, and 6.52% when eager loading.
Is it worth it? I don't know, what do you think?
If you'd like to try, depend on the native branch.
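In case it helps, a sketch of the Gemfile line for that (repository and branch name taken from the posts above):

gem "zeitwerk", github: "fxn/zeitwerk", branch: "native"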
Books by top authors accelerated learning.
USENET and mailing lists did so even more.
They were catalysts, not obstacles.
So we need to see how this unfolds. Some things won’t work, but what sticks is going to be a new paradigm.
At the same time, we should be cautious about extrapolating from the previous world.
When the printing press was invented, people worried that memory skills would deteriorate as oral transmission faded.
And here we are, in a different world.
(BTW, did those people ever play the telephone game?)
*paradise, modulo that it remains to be seen how these solutions are maintained.
Because something that is happening is that people create. And, as it happens with business models, creation is the easy part.
Like most people, I am figuring AI out. Not AI per se, but my own feelings about it.
You won't grow as before. Can a junior develop the seniority needed to review AI-generated solutions? Maybe? To be seen.
The builders are in a paradise, the thinkers... not as much fun.
www.jernesto.com/articles/thi...
📆 The CFP is closing soon! Join dedicated Rubyists, spark great conversations, and share ideas that inspire.
balkanruby.com
#RubyConference #RubyCommunity #CFP #CallforSpeakers
I guess my Stevens from 2001 could be updated 😅.
03.02.2026 09:01 — 👍 0 🔁 0 💬 0 📌 0
Awesome, yours is battle-tested.
02.02.2026 19:16 — 👍 0 🔁 0 💬 1 📌 0
@eregon.me @byroot.bsky.social I am writing that extended directory listing in C in Zeitwerk itself, with a fallback to stat functions if needed, and to Ruby as a last resort.
By being internal it can be ad hoc: filter hidden files out, and only return :file, :directory, or nil. That is all I need.
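In plain Ruby, the idea is roughly this (a sketch of the last-resort fallback only; the actual code is a C extension with stat-based fallbacks):

# List a directory, skip hidden entries, and classify each remaining entry
# as :file, :directory, or nil for anything else.
def ls_with_types(dir)
  Dir.children(dir).filter_map do |entry|
    next if entry.start_with?(".") # hidden entries are filtered out

    path = File.join(dir, entry)
    type =
      if File.directory?(path)
        :directory
      elsif File.file?(path)
        :file
      end # nil for sockets, pipes, dangling symlinks, etc.

    [entry, type]
  end
end

p ls_with_types(__dir__)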
And that cache saves a ton of time in gem dependencies, which generally do not change, they are just sitting there.
Where do you store the cache?
Aha, if the stat + cache says no entries were added/removed, you save the syscall for its listing, right?
Yeah, so without the cache I'd do roughly twice as many syscalls for directories in the worst case.
That cache is a cool trick.
You need to stat the dir to get the mtime, right?
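For reference, a sketch of this kind of mtime-keyed cache (not the actual implementation being discussed; it assumes a directory's mtime changes whenever entries are added or removed in it, and it keeps the cache in memory rather than wherever the real one is stored):

class DirListingCache
  def initialize
    @cache = {} # dir path => [mtime, listing]
  end

  # Returns the cached listing if the directory is unchanged, otherwise
  # recomputes it with the given block and stores it.
  def fetch(dir)
    mtime = File.mtime(dir) # one stat syscall
    cached_mtime, cached_listing = @cache[dir]
    return cached_listing if cached_mtime == mtime

    listing = yield(dir)
    @cache[dir] = [mtime, listing]
    listing
  end
end

cache = DirListingCache.new
entries = cache.fetch(__dir__) { |dir| Dir.children(dir) } # listing skipped on later calls while mtime is unchanged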
I believe forbidding extensions in directories that represent namespaces would result in a comparable number of syscalls in practice, without the extra complexity.