
@nateberkopec.bsky.social

1,363 Followers  |  53 Following  |  547 Posts  |  Joined: 10.11.2024

Latest posts by nateberkopec.bsky.social on Bluesky

Datadog had no right to have the best browser RUM around, and yet, they have it.

Every time I use the product I'm so happy with it. And then, it's integrated with everything else on the DD platform. Heaven.

28.11.2025 17:00 — 👍 4    🔁 0    💬 0    📌 0

I never, EVER accept LLM output without making it run code to verify what it did (tests, something more manual, whatever).

Neither you nor your LLM can run code in your head. Do not trust, always verify.

27.11.2025 16:59 — 👍 7    🔁 1    💬 0    📌 1
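What "making it run code" looks like at its smallest, as a sketch: a plain Minitest file. The `slugify` method and tests are hypothetical, standing in for whatever the LLM just wrote.

```ruby
require "minitest/autorun"

# Stand-in for some LLM-written code under review (hypothetical example).
def slugify(title)
  title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

# The verification step: the agent (or you) actually runs this file and
# reads the pass/fail output, instead of trusting the diff on sight.
class SlugifyTest < Minitest::Test
  def test_downcases_and_dasherizes
    assert_equal "hello-world", slugify("Hello World!")
  end

  def test_strips_leading_and_trailing_dashes
    assert_equal "a-b", slugify("--a b--")
  end
end
```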
Add workers :auto by nateberkopec · Pull Request #3827 · puma/puma It's probably not that great that you MUST use the ENV var in order to get this neat behavior. I noticed that this rounds down, which means a cpu quota of 512 will potentially put you into ...

github.com/puma/puma/p...

26.11.2025 17:02 — 👍 2    🔁 0    💬 0    📌 0

Letting Puma auto-set your worker count is the easiest way to go for 90% of use cases.

Currently, you can only do that with WEB_CONCURRENCY=auto, but we'll also make this possible in the next Puma version by using `workers :auto` in your puma.rb.

26.11.2025 17:02 — 👍 14    🔁 3    💬 1    📌 1
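For concreteness, a minimal config/puma.rb sketch of both forms, with a by-hand equivalent included. The round-down mirrors the behavior noted in PR #3827; treat the DSL line as unreleased until the next Puma version ships.

```ruby
# config/puma.rb — a sketch, not canonical config.

# Today: let Puma pick the worker count via the env var at boot:
#   WEB_CONCURRENCY=auto bundle exec puma -C config/puma.rb

# Forthcoming (puma/puma PR #3827): the same thing in the config DSL:
# workers :auto

# Roughly what "auto" resolves to if you spell it out by hand
# (note the floor: fractional CPU quotas round down):
require "concurrent"
workers Concurrent.available_processor_count.floor
```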

Currently you can set WEB_CONCURRENCY=auto in the env and it will do this for you; in future versions you'll be able to use `workers :auto` in your puma.rb config.

25.11.2025 20:17 — 👍 2    🔁 0    💬 1    📌 0

If you want to use physical core counts instead and pay AWS twice as much, be my guest, but just know that I've been using logical core count to set this for 10 years and it does not increase latency except at ~99% or greater CPU utilization (wowee!)

25.11.2025 17:02 — 👍 1    🔁 0    💬 0    📌 0

Aside: why logical and not physical cores?

In 2014 Intel and AWS decided to sell everyone logical cores as "vCPU". We bought it, and eventually in common use vCPU==CPU. Turns out they were right: hyperthreading works. 10+ years of production use confirmed it.

25.11.2025 17:02 — 👍 1    🔁 0    💬 1    📌 0
Post image

If you are using Concurrent.physical_processor_count or Concurrent.processor_count to set your Puma/Unicorn worker counts, that is wrong.

Use Concurrent.available_processor_count. It takes into account cpu quotas in envs like k8s/docker.

25.11.2025 17:02 — 👍 7    🔁 1    💬 2    📌 0
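A console sketch of the difference. It assumes a recent concurrent-ruby (`available_processor_count` is a newer addition), and the example numbers in the comments are a hypothetical quota-limited container, not guaranteed output:

```ruby
require "concurrent"

# On a quota-limited container (say, a k8s pod with a 2.5-CPU quota on a
# 16-hyperthread host), the three counts diverge:
p Concurrent.physical_processor_count   # host's physical cores, e.g. 8
p Concurrent.processor_count            # host's logical cores, e.g. 16
p Concurrent.available_processor_count  # quota-aware, e.g. 2.5 (a Float)

# So for Puma/Unicorn, size workers off the quota-aware count:
workers_count = Concurrent.available_processor_count.floor
```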

Tired: Repo GitHub stars
Wired: Repo contributor count

24.11.2025 21:03 — 👍 6    🔁 0    💬 0    📌 0
Post image

have you, too, been to so many conferences that you feel nostalgia for this?

24.11.2025 17:01 — 👍 4    🔁 0    💬 0    📌 0
GitHub - nateberkopec/dotfiles: yup we're doin the thing

Not _really_; my public GitHub is full of Claude-powered work, though. My dotfiles contain my Claude.md, which is extremely basic: github.com/nateberkopec...

22.11.2025 10:27 — 👍 0    🔁 0    💬 0    📌 0

I understand the "it's fun to be competent" and "I can feel the skill draining out of my fingers" people, but like, did you ever really NEED to be an expert on the fucking Datadog Scorecard API?

22.11.2025 10:26 — 👍 1    🔁 0    💬 0    📌 0

He thinks only of himself. He thinks if he can get any peace over the line, no matter what the terms, it gets him a Nobel.

22.11.2025 10:26 — 👍 1    🔁 0    💬 1    📌 0

This kind of project would've been maybe ~5-8 hours without an LLM. The LLM allowed me to quickly paper over knowledge I didn't really have (Datadog's API, how scorecards work). I basically just provided high-level arch and direction and let the rest auto-complete.

21.11.2025 17:02 — 👍 4    🔁 0    💬 0    📌 0

Example of the kind of thing I'm doing more of now in the age of LLMs:

A ~1000-line Ruby project for creating Datadog scorecards. A client had ~30+ microservices, but adherence to "platform" standards like "use jemalloc" was spotty. The LLM did it in something like ~1 hr.

21.11.2025 17:02 — 👍 5    🔁 0    💬 3    📌 0
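A hypothetical miniature of the scorecard idea, not the client project's code. It assumes Datadog's Service Scorecards API and a `POST /api/v2/scorecard/outcomes/batch` endpoint; verify the path, payload shape, and auth headers against current Datadog docs, and note that every name below (including `jemalloc_rule_id`) is an invented placeholder:

```ruby
require "net/http"
require "json"
require "uri"

DD_SITE = "api.datadoghq.com" # assumption: US1 site
HEADERS = {
  "DD-API-KEY"         => ENV.fetch("DD_API_KEY"),
  "DD-APPLICATION-KEY" => ENV.fetch("DD_APP_KEY"),
  "Content-Type"       => "application/json",
}.freeze

# Report whether a service meets a platform standard (e.g. "uses jemalloc").
# Endpoint and payload shape are assumptions — check Datadog's docs.
def report_outcome(service:, rule_id:, passing:)
  uri = URI("https://#{DD_SITE}/api/v2/scorecard/outcomes/batch")
  body = {
    data: {
      type: "batched-outcome",
      attributes: {
        results: [
          { rule_id: rule_id, service_name: service, state: passing ? "pass" : "fail" },
        ],
      },
    },
  }
  Net::HTTP.post(uri, JSON.dump(body), HEADERS)
end

# report_outcome(service: "checkout", rule_id: jemalloc_rule_id, passing: true)
```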

IME you never trust, only verify. If a task is not verifiable, don't give it to an LLM.

20.11.2025 21:10 — 👍 3    🔁 0    💬 1    📌 0

Architecture astronauts create 1+ service layers, which are ~internal views. They end up having to serialize big objects before crossing layers, but only use tiny parts of the object on the other side. Since they didn't use lazy accessors, they waste tons of db queries/work.

20.11.2025 17:00 — 👍 2    🔁 0    💬 0    📌 0
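A contrived, Rails-flavored sketch of that failure mode; all class, association, and column names are hypothetical:

```ruby
# The astronaut version: the layer boundary eagerly serializes everything...
class OrderService
  def fetch(id)
    order = Order.includes(:line_items, :customer, :shipments).find(id)
    # Loads and serializes the whole graph: many queries, one big hash.
    order.as_json(include: [:line_items, :customer, :shipments])
  end
end

# ...and the caller on the far side of the layer reads a single field:
total = OrderService.new.fetch(42)["total_cents"]

# A lazy accessor defers the work until (and unless) it's actually needed:
class LazyOrder
  def initialize(id)
    @id = id
  end

  def record
    @record ||= Order.find(@id) # first access triggers the (single) query
  end

  def total_cents
    record.total_cents
  end
end

total = LazyOrder.new(42).total_cents # one query, one column used
```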

>merged back

gl

19.11.2025 20:30 — 👍 0    🔁 0    💬 1    📌 0

go off king

19.11.2025 20:29 — 👍 2    🔁 0    💬 1    📌 0

FWIW I think what I was referring to on the podcast was more of a Turbo/Hotwire+: something that was a superset of the current API but maintained 90% compatibility with upstream.

19.11.2025 02:07 — 👍 0    🔁 0    💬 1    📌 0
Post image

If you too have a Rails app and would like to see your page load times improve like this, you should try our retainer service 😉

17.11.2025 17:03 — 👍 6    🔁 0    💬 0    📌 0
AI Powered Private School | Alpha School Discover Alpha School's AI-powered 2 Hour Learning model. Empower your child to excel at academics while developing real world skills and passions.

So how do you prepare a programmer to deal with these edges when you can't really sell them words or content? Maybe it's time to return to cohort-based, in-person or synchronous learning of some kind? Take a page from kids' education like Alpha (alpha.school/)?

16.11.2025 21:53 — 👍 0    🔁 0    💬 0    📌 0

3. Lack of observability. Without the right data and context to feed to the LLM, it doesn't create good answers.
4. Slop outputs. LLMs generate overly verbose and just plain bad solutions.
5. LLMs don't make (good) decisions, they generate a lot of options.

16.11.2025 21:53 — 👍 2    🔁 0    💬 1    📌 0

But in my own work, I notice there are still many rough edges around LLMs:

1. The users don't have enough taste and knowledge to verify the work being done, and ship no-ops or even regressions b/c the LLM said "it's optimized"
2. Unknown unknowns. Users don't know what to prompt.

16.11.2025 21:53 — 👍 1    🔁 0    💬 2    📌 0

The need for building skills isn't going away. It's more important than ever before. But the need for a "reference manual" in the style of the Complete Guide to Rails Performance from 10 years ago is clearly gone.

16.11.2025 21:53 — 👍 1    🔁 0    💬 1    📌 0

I'm thinking about the future of the content side of my business. I think where we're headed, intellectual property isn't coming with us.

In a world without copyright, how do I capture value while still teaching people the skill of performance? Building their taste?

16.11.2025 21:53 — 👍 7    🔁 0    💬 1    📌 0

"I will absolutely give my return, but I'm being audited now for two or three years, so I can't do it until the audit is finished"

16.11.2025 05:11 — 👍 1    🔁 0    💬 0    📌 0

I'm hopeful that the tools will do all this for us in the future, at least. It doesn't look insurmountable.

14.11.2025 20:21 — 👍 0    🔁 0    💬 0    📌 0

4. GPG-signing all commits, with touch (meat-finger-based) confirmation. This means your agent might make a commit and you go back and amend it to sign it.
5. Running local agents on your "real" machine only in sandbox mode.

14.11.2025 16:56 — 👍 3    🔁 0    💬 1    📌 0

After a deep dive on LLM agent security, here are the things I think we should all be doing but aren't:

1. Running agents on remote containers only.
2. Doing internet research in a separate cleanroom env.
3. Having LLMs read logs daily for signs of exfiltration/promptjacking.

14.11.2025 16:56 — 👍 7    🔁 1    💬 1    📌 1
