Yeah, you still need pointer fields to distinguish between a missing value and a value explicitly set to the zero value.
05.08.2025 21:18 · @rednafi.com.bsky.social
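A minimal Go sketch of the point above, using a hypothetical `Payload` struct: with a pointer field, JSON decoding can tell an absent `count` apart from an explicit `"count": 0`, which a plain `int` field cannot.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Payload uses a pointer field so a missing "count" can be told
// apart from an explicit "count": 0. (Struct and field names are
// illustrative, not from the original post.)
type Payload struct {
	Count *int `json:"count"`
}

// isCountSet reports whether the JSON document carried a "count" key.
func isCountSet(raw string) bool {
	var p Payload
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		return false
	}
	return p.Count != nil
}

func main() {
	fmt.Println(isCountSet(`{}`))           // false: field absent
	fmt.Println(isCountSet(`{"count": 0}`)) // true: explicitly zero
}
```

With a non-pointer `int`, both inputs would decode to `0` and the distinction would be lost.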
Software Engineer @Doordash/Wolt. Writing at rednafi.com. Recurring themes: Go, Python, distsys, schema-driven development, eventual consistency, resilience patterns, HA, data access strategies, observability, SRE practices, and sci-fi.
TLDR;
Start with PG. Move to Aurora later on if you need to. Use your DB as a dumb storage backend. The fancy query patterns where the optimizer needs to do a ton of work don't scale.
Active-active multi-region DBs should be your last resort. They are hard to operate and build on. (n/n)
Being aware of this full circle is better than blindly saying: "those column-family DBs are terrible and you should always use PG."
There's a reason why people moved from monoliths to SoA, and it's not just FAANG trying to sell you something. (6/n)
At a certain scale, no matter what you do, you'll need an active-active DB where every cluster can accept mutations & not just one in a single region.
But active-active DBs are AP and not CP like Postgres/Aurora. So it's a different tradeoff. And boy, is it hard to develop your app with these DBs. (5/n)
The optimizer in your relational DB is doing a ton of work, and it doesn't scale smoothly when you have multi-region DBs.
So all this talk about NoSQL DBs isn't just DB vendors trying to sell you something. It's easy to say so when you haven't seen relational DBs croak & break under load. (4/n)
But it's good to keep in mind that the underlying storage is still shared in Aurora and there's only so much you can do with it.
Also, failover time for Aurora has been a deal breaker in some instances. But that's the price you pay for consistency. (3/n)
This mostly doesn't matter for startups. However, PG evangelists tend to overestimate the load at which it fails. This is my third workplace where I'm seeing PG failing earlier than "expected".
Aurora is amazing & I hold the Aurora white paper in the same regard as the Dynamo paper. (2/n)
Stay in the Postgres lane as long as you can. Then move to Aurora if you need HA.
The transition should be smooth as long as you don't use all the power of PG. Complex queries & fancy features don't scale. In fact, they fail spectacularly when the workload ramps up. (1/n)
At Wolt, we have a position called "Competence Lead," where the person acts as an internal DevRel.
They champion certain tech, share practices that work for us, work w/ recruiters, attend confs, etc.
This resulted in some public facing content. But yeah, SV companies lead the way in this scene.
Polishing internal knowledge to share it with a broader audience takes a ton of work.
Most companies don't make it a priority. The ones that do it exceptionally well often include publishing engineering blog posts in the performance review process, similar to interviewing candidates.
It's generally a good idea to keep the responsibility of calling external svcs separate from the core logic.
Martin Fowler calls it a Gateway, though he mainly explores it in an OO context. I wanted to showcase how to do it in Go by placing interfaces in the right places.
rednafi.com/go/gateway_p...
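A rough sketch of the gateway idea in Go, with hypothetical names (`WeatherGateway`, `report`) rather than anything from the linked post: the core logic depends on a small interface it owns, while the type that actually calls the external service lives at the edge and can be swapped for a fake in tests.

```go
package main

import "fmt"

// WeatherGateway is the interface the core logic owns. The concrete
// implementation that talks HTTP to the vendor lives elsewhere.
type WeatherGateway interface {
	Temperature(city string) (float64, error)
}

// fakeGateway stands in for the real HTTP-backed gateway.
type fakeGateway struct{}

func (fakeGateway) Temperature(city string) (float64, error) {
	return 21.5, nil // canned response; a real gateway would call the API
}

// report is core logic: it knows nothing about HTTP or the vendor API.
func report(gw WeatherGateway, city string) (string, error) {
	t, err := gw.Temperature(city)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s: %.1fC", city, t), nil
}

func main() {
	out, _ := report(fakeGateway{}, "Helsinki")
	fmt.Println(out)
}
```

Because the interface sits next to the consumer, swapping the fake for a real HTTP client changes nothing in `report`.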
While it's annoying when companies go <insert latest fad>-first, the alternative is often turning into IBM.
Sticking with the fundamentals might please a few purists and cynics like myself, but it's the wrong move when the entire platform is shifting.
We just need to wait out this slop gen era.
I still prefer NVIDIA ones. There are just so many little things that don't work well with AMD graphics cards. The proprietary driver works well.
30.07.2025 21:08
Neat timing. I wrote a rant against a DI library that hit the front page of the orange site a few weeks back :D
rednafi.com/go/di_framew...
A great set of advice on generating metrics from your app. The idea is that you don't need to count both error & success rates in Prometheus. Just tracking errors will cover the binary success/error cases just fine. Use labels when there are multiple dimensions.
promlabs.com/blog/2023/09...
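A toy illustration of why counting only errors (plus a total) is enough: the success side can always be derived, so a separate success counter adds nothing. The numbers and metric names below are made up for the sketch.

```go
package main

import "fmt"

// successRate derives the success side from a total and an error
// count, which is why a dedicated "success" counter is redundant.
func successRate(total, errors float64) float64 {
	return (total - errors) / total
}

func main() {
	// Made-up numbers. The PromQL equivalent would look roughly like:
	//   1 - rate(http_errors_total[5m]) / rate(http_requests_total[5m])
	// (metric names are hypothetical).
	fmt.Printf("%.3f\n", successRate(1000, 25)) // 0.975
}
```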
Yeah, OTEL is a complex beast. I almost always start with stdout logs and Prometheus for metrics and see how long I can get away with it.
28.07.2025 17:52
Problem is, in a real production environment, you rarely depend on just metrics.
Almost always, we need traces correlated with metrics, correlated with logs.
What would I do with the money if I can't even have the time to spend it? What a bleak glimpse into the future.
24.07.2025 21:25
Both traces and metrics are abstractions built on top of logs that pre-process and encode information along two orthogonal axes: one request-centric, the other system-centric.
Traces, defined as DAGs of spans, and metrics, collections of numbers that measure system behavior, are both somewhat vague as observability-signal definitions. The clearest explanation I've found so far comes from a 2017 presentation by Cindy Sridharan.
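A tiny Go sketch of the "trace = DAG of spans" definition, with illustrative names not taken from any OTel API: each span records its parent, and walking parents from a leaf recovers the request path through the system.

```go
package main

import "fmt"

// Span is a deliberately minimal span: an ID plus a parent link.
// A trace is then the DAG formed by these parent edges.
type Span struct {
	ID     string
	Parent string // "" for the root span
}

// path walks parent links from a leaf span up to the root.
func path(spans map[string]Span, leaf string) []string {
	var out []string
	for id := leaf; id != ""; id = spans[id].Parent {
		out = append(out, id)
	}
	return out
}

func main() {
	trace := map[string]Span{
		"root": {ID: "root"},
		"db":   {ID: "db", Parent: "root"},
		"sql":  {ID: "sql", Parent: "db"},
	}
	fmt.Println(path(trace, "sql")) // [sql db root]
}
```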
23.07.2025 22:56
How often do you see new grads asking questions about this stuff? Way less than its dev-cycle counterpart.
There are zillions of vendors in this space promoting zillions of ways of doing things. At least OTel is trying to bring some order to the chaos. We need way more discussion around o11y patterns.
The answers to these questions are vague, and we still don't have a shared vocabulary like DDD, TDD, the Test Pyramid, or SOLID in this avenue.
How often do you see listicles like "here's how to do list comprehensions in Python," but for operations, maintenance, and o11y stuff?
There are rough patterns that you can read about here and there, but questions like:
- o11y is cross-cutting; where do I put all those OTel statements in my codebase?
- how do I prevent o11y stuff from obfuscating my core logic?
- how much instrumentation is too much?
- what's the angle of attack for handling incidents?
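One common (not the only) answer to the first two questions is to wrap core functions in a decorator so telemetry stays at the edge. A sketch in Go with hypothetical names (`instrument`, `handleOrder`), printing where real code would emit a span or metric:

```go
package main

import (
	"fmt"
	"time"
)

// handleOrder is core business logic only; no telemetry in here.
func handleOrder(id string) error {
	return nil
}

// instrument wraps any handler with timing and error recording,
// keeping the instrumentation out of the core function's body.
func instrument(name string, fn func(string) error) func(string) error {
	return func(id string) error {
		start := time.Now()
		err := fn(id)
		// A real version would record a span/metric instead of printing.
		fmt.Printf("%s took=%s err=%v\n", name, time.Since(start).Round(time.Second), err)
		return err
	}
}

func main() {
	h := instrument("handleOrder", handleOrder)
	h("order-42")
}
```

The tradeoff: decorators handle boundary-level signals cleanly, but anything that needs interior detail (a span around one query inside the function) still has to live in the body.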
Operational practices, o11y, and SRE stuff get a lot less focus. Sure, there are north-star books like the one from Google and Observability Engineering by Charity Majors, but on the implementation side of things, we still don't have a shared understanding like we do about TDD or Hexagonal Architecture.
21.07.2025 10:40
Broadly speaking, the lifecycle of SWE can be divided into two streams: the development cycle & the maintenance cycle.
Most of the fuss about TDD, DDD, and Design Patterns focuses only on the former, while in reality we spend a ton of time maintaining & debugging existing stuff, which falls in the latter segment.
Me too. It's just sad to see "labor exploitation in a trench coat" being sold as the future, and some folks just can't shut up about it.
20.07.2025 23:18
"What a bleak future"
20.07.2025 22:12
There's a ton of interesting work being pushed out by people all the time, but LLM yappers have successfully drowned out everything else.
So I've tuned out of the whole thing for a while. I still use these tools at my own pace but mostly don't pay attention to the grifters. It's peaceful here.
So it's a bit hard for me to get overly excited about the power of these tools.
They are great and do work quite well at times, but I'm also tired of people inventing phrases for every LLM artifact: LLMs fart and it's a stochastic wind release; LLMs spew slightly coherent sentences and it's AGI.
It's great if someone is into that kind of thing. But I have yet to extract anything valuable from these discussions for the kind of work I do.
I do use LLMs, but I work w/ dist systems & large HA databases. I don't write throwaway CLIs or tiny greenfield web tools, where LLMs shine.
Then I traced back their history and found that most of them have always rambled about whatever topic was most relevant at the time.
It's clout-driven development, and some folks have made a career out of it by writing blogs, appearing on podcasts, and always being at the butt of every discussion.