@nateberkopec.bsky.social
I've been using this combination of prompt+tool to get Claude to set up arbitrary Rails app dev environments for me. It's got a pretty good success rate, I would say about 80%.
08.08.2025 17:02

I also just realized hiredis-client and redis-client are not the same thing.
08.08.2025 01:48

99% of developer environment setup scripts in Rails apps would be improved if:
1. They didn't try to install Ruby or a Ruby version manager if the correct version is already working (see the sketch after this list).
2. They used direnv.
3. They used mise.
4. They were shell and editor agnostic.
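Not a prescription, just a rough sketch of what points 1 and 3 might look like in a bin/setup-style Ruby script. The .ruby-version file and the mise command here are assumptions about your setup:

    #!/usr/bin/env ruby
    # Hypothetical bin/setup fragment: only reach for a version manager when the
    # Ruby already on PATH doesn't match the project's .ruby-version.
    wanted = File.read(".ruby-version").strip

    if RUBY_VERSION == wanted
      puts "Ruby #{RUBY_VERSION} already active; nothing to install"
    else
      # mise is shell- and editor-agnostic, so defer to it rather than a specific manager.
      system("mise", "install", "ruby@#{wanted}") or abort("couldn't install Ruby #{wanted}")
    end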
Actually cc @mike.contribsys.com, this should be a part of your support workflow. When people complain about the Redis RTT warning, tell them to try RUBY_THREAD_TIMESLICE=10.
We've done a lot of prod + synthetic benchmarking and see no impact on Sidekiq latency/throughput in general; it's safe.
We can reproduce the same thing with your redis gem RTT check that Sidekiq uses. Under heavy CPU load, RUBY_THREAD_TIMESLICE=10 produces a more accurate value.
We're limited on resources and won't be able to do a public repro for a while (and changing the env var is a workaround).
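If you want to see it yourself, here's a rough sketch (not Sidekiq's actual check; it assumes a local Redis and the redis-client gem) that measures one Redis round trip while other threads hog the CPU. Run it as-is, then again with RUBY_THREAD_TIMESLICE=10, and compare:

    require "redis-client"

    redis = RedisClient.new # assumes Redis is listening on localhost:6379

    # Saturate the CPU from sibling threads so the measuring thread has to wait for the GVL.
    busy = 4.times.map { Thread.new { loop { 10_000.times { |i| i * i } } } }

    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    redis.call("PING")
    rtt_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000.0
    puts format("measured RTT: %.2f ms", rtt_ms)

    busy.each(&:kill)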
There's an emerging consensus around LLM coding agents that 5 to 15 minutes is the "ideal" amount of time they should run w/o human input.
Longer -> context rot; success probability drops to ~5%
Shorter -> Too much human intervention/review, main thread blocked
At first I was a bit skeptical of Shopify's "LLM spend" leaderboard, but now I see from firsthand experience that most devs are still underindexing how useful LLMs are and are still sweating the costs far too closely. Unlimited budget was the right call.
05.08.2025 17:01

Swapping between threads more often means that threads can more accurately record their start/stop timings, whereas if the GVL can only be swapped every 100ms, you might record those timings up to 100ms too late.
04.08.2025 16:57

One of the things we're investigating now is that this could potentially greatly improve the accuracy of timing in your profiling. Note how in the default/100 example, the "waiting on IO" spans are much longer. We think this is an inaccuracy caused by the thread not having the GVL.
04.08.2025 16:57

We've tried tuning this setting with multiple retainer clients and haven't yet seen any measurable differences in p99 latency or in throughput. But it certainly does have a big effect on how the underlying server is working.
04.08.2025 16:57

From the Speedshop skunkworks laboratory: effects of RUBY_THREAD_TIMESLICE.
In this example, we hammer a 10 thread Puma server with a high IO workload. Note how drastically the amount of time spent waiting for the GVL shifts around and how long the spans are.
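A toy way to see the same skew outside a profiler (just a sketch; sleep stands in for blocking IO, since both release the GVL): the measured span below is the 50ms wait plus however long the thread takes to win the GVL back, which under CPU contention and the default 100ms timeslice can dwarf the wait itself.

    # CPU-busy sibling threads compete with the measuring thread for the GVL.
    busy = 4.times.map { Thread.new { loop { 10_000.times { |i| i * i } } } }

    t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    sleep 0.05 # stands in for a 50ms IO wait; the GVL is released while sleeping
    # The end timestamp can't be recorded until this thread gets the GVL back,
    # so the measured span includes the scheduling delay on top of the 50ms.
    t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)

    puts format("expected ~50 ms, measured %.1f ms", (t1 - t0) * 1000)
    busy.each(&:kill)

Run it again with RUBY_THREAD_TIMESLICE=10 and the gap should shrink.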
I used it the other day to help brainstorm some dungeons and dragons campaign ideas and it was amazing! Really helped me to broaden my horizons and try out wacky ideas. My usual rule of "use it as a companion and never copy paste output directly" applied.
03.08.2025 22:53

LLMs have me thinking about this chart almost daily. We are in the age of self-ware. Small throwaway utility software for an audience of one.
01.08.2025 16:56

I'm more of a "why not both" guy myself. Solar has been growing like absolutely crazy, more than anyone expected, and it's still not enough.
31.07.2025 23:05

Burning methane is better than the alternative of rolling coal, at least. It's hard to see this as anything other than the failure of nuclearization. We already know how to decarbonize, we just refuse to do it.
31.07.2025 22:48

After working with Andrew Atkinson a few times, I learned that it's really important to be able to try out ~dozens of different kinds of indexes on real data, because it's really impossible to predict the perf impact in advance. This kind of tool is perfect for that.
31.07.2025 17:04

This was a pretty simple project in the end that combined a small web view + a lambda function. The database clone is automatically set to be destroyed after an hour so you're not racking up AWS bills on your massive prod-size instance.
31.07.2025 17:04

I wish everyone had this tool that my client Pronto built: temporarily set up a production db copy.
They take a snapshot of prod every day. This page allows a dev to start a new db instance, same size, same data. Fire up psql and test all the indexes you want w/o fear.
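The core of a tool like that can be surprisingly small. This is not Pronto's actual code, just a sketch of the idea with the aws-sdk-rds gem; the instance names and sizing are made up:

    require "aws-sdk-rds"

    rds = Aws::RDS::Client.new

    # Find the most recent automated snapshot of the production instance.
    latest = rds.describe_db_snapshots(
      db_instance_identifier: "prod-db",   # hypothetical prod instance name
      snapshot_type: "automated"
    ).db_snapshots.max_by(&:snapshot_create_time)

    clone_id = "prod-clone-#{Time.now.to_i}"

    # Spin up a throwaway instance from that snapshot: same data, prod-ish size.
    rds.restore_db_instance_from_db_snapshot(
      db_instance_identifier: clone_id,
      db_snapshot_identifier: latest.db_snapshot_identifier,
      db_instance_class: "db.r6g.2xlarge"  # match prod sizing
    )

    # An hour later, a scheduled job can throw it away so it doesn't run up a bill:
    # rds.delete_db_instance(db_instance_identifier: clone_id, skip_final_snapshot: true)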
I've always been a "gnomish" programmer and I think this technology is really designed for gnomes.
en.wikipedia.org/wiki/Wikipe...
In some ways LLMs are the perfect tool for me. It's often really hard for me to write the first ~500+ lines of a feature or refactor. But now the LLM does it, and I think "jesus christ this is awful, dumb-ass Claude here's what we should do..."
30.07.2025 16:56

Are you an Obsidian user? If yes, sell me on it.
Are you a former Obsidian user that gave up on the dream? Why?
Claude wrote two commits in Go for me, a language that I do not understand the first thing about, and those commits were accepted in the OSS project Thruster:
28.07.2025 16:56

I mostly love it! But there are a lot of things I would change. Anyone who uses a tool every day will have strong opinions about it.
24.07.2025 21:13

With LLMs making leetcode and other whiteboard problems trivial, software hiring IMO should shift towards establishing taste, which LLMs absolutely do not have.
"What do you hate about ActiveRecord? What would you change about Rails if you could?" Review this PR, etc.
You can't learn by editing. Editing without knowing about the underlying "thing" is like trying to grade a test without the answer key. There's no feedback loop to create learning.
Thus, a junior developer is poorly served by trying to learn through reviewing LLM-generated code.
For now, the only workaround is to disable Router 2.0:
heroku features:disable http-routing-2-dot-0 -a <app name>
status.heroku.com/incidents/2863
A lot of people in our CGRP Slack community are having issues with Router 2.0 on Heroku. I initially thought this was more of the Puma/keepalive problems, but actually we're seeing it with Falcon, AnyCable, and Puma with keepalives disabled too!
22.07.2025 21:11