If you enjoyed this thread:
1. Follow me @itsalexzajac for more content on Software and AI
2. Check out Hungry Minds: newsletter.hungryminds.dev
3. RT the tweet below to share this thread with your audience
What did I miss?
Pick the one that matches your constraints, and go build things!
Neither is "better."
Both are tools.
GraphQL works when clients need flexibility and you can handle the backend complexity.
REST works when your API serves many clients with similar needs.
The cost of this:
Query depth can spiral.
Rate limiting gets messy.
Complexity moves to the client.
Caching becomes your problem.
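
To make "query depth can spiral" concrete, here's a hypothetical nested query (made-up schema) that fans out multiplicatively on the backend:

# Each nesting level can multiply server work: 100 friends x 100 friends x their orders...
deep_query = """
{
  user(id: "1") {
    friends {
      friends {
        orders { items { reviews { author { name } } } }
      }
    }
  }
}
"""
# Production GraphQL servers usually guard this with a max query depth or a cost budget.
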
GraphQL is a buffet.
You grab exactly what you need.
One plate. One trip. One endpoint.
No overfetching. No wasted bandwidth.
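
A minimal sketch of that, assuming a hypothetical https://api.example.com/graphql endpoint: the client names exactly the fields it wants, nothing more.

import requests

query = """
{
  dish(id: "42") {
    name
    price
  }
}
"""
# Only `name` and `price` come back - no rice, no garnish.
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json())
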
REST is a menu.
Multiple endpoints. Standard HTTP methods. Predictable, simple, cacheable.
But you're stuck with what the kitchen sends.
You order the salmon, you get the salmon.
Plus rice. Plus vegetables. Plus garnish.
Even if you only wanted the fish.
That's overfetching.
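
The same lookup against a hypothetical REST endpoint, as a sketch: the server decides the payload, so the whole resource comes back.

import requests

resp = requests.get("https://api.example.com/dishes/42")
dish = resp.json()   # entire resource: name, price, sides, garnish, nutrition...
print(dish["name"])  # the client wanted one field; the rest is overfetched bandwidth
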
Most developers waste hours debating REST vs GraphQL...
They're solving the wrong problem.
The real question isn't which is better.
It's which solves YOUR problem.
If you enjoyed this thread:
1. Follow me @itsalexzajac for more content on Software and AI
2. Check out Hungry Minds: newsletter.hungryminds.dev
3. RT the tweet below to share this thread with your audience
What do you think?
Don't choose between traditional ML and LLMs.
Combine them strategically.
Use classical methods for filtering.
Use LLMs for nuanced ranking.
This is 10,000x cheaper than the naive LLM approach.
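
The 10,000x figure is plausible as pure shortlist math. Illustrative arithmetic only; the catalog size below is a made-up stand-in, not a DoorDash number:

catalog_size = 2_000_000  # items a naive approach would push through the LLM
shortlist = 200           # items the LLM actually sees after classical filtering
print(catalog_size // shortlist)  # 10000 -> token cost shrinks roughly in proportion
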
What can we learn from this?
Results:
✓ LLM-as-judge validated offline quality
✓ Order penetration improved in production
✓ Hybrid approach unlocked LLM benefits without scale traps
4. Personalized scoring and online integration
Combine LLM rankings with real-time signals (location, time, preferences). Deploy with standard production infrastructure.
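
A minimal sketch of that blend; the weights and signals here are hypothetical, not DoorDash's:

def final_score(llm_rank: int, distance_km: float, is_open: bool) -> float:
    relevance = 1.0 / (1 + llm_rank)     # better LLM rank -> higher relevance
    proximity = 1.0 / (1 + distance_km)  # real-time signal: closer is better
    return (0.7 * relevance + 0.3 * proximity) if is_open else 0.0

print(final_score(llm_rank=0, distance_km=2.0, is_open=True))  # 0.8
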
3. Targeted LLM mapping
Now the LLM only ranks 200 items, not millions. It maps filtered candidates to user preferences with full context and nuance.
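
Roughly what that call could look like. Sketch only: `llm` stands for whatever chat-completion client you use, and the prompt shape is an assumption:

def rank_candidates(llm, user_profile: str, candidates: list[str]) -> str:
    prompt = (
        f"User preferences: {user_profile}\n"
        f"Candidates ({len(candidates)}):\n"
        + "\n".join(f"- {c}" for c in candidates)
        + "\nRank the 20 best matches and return them as a JSON list."
    )
    return llm(prompt)  # 200 items fit in one context window; millions never would
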
2. RAG retrieval
Use vector similarity to pull ~200 relevant candidates from millions of items. Fast, cheap, and surprisingly effective at filtering.
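
A numpy-only sketch of that retrieval step, assuming embeddings are precomputed:

import numpy as np

def top_k(query_vec, item_vecs, k=200):
    q = query_vec / np.linalg.norm(query_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    sims = items @ q              # cosine similarity against every item
    return np.argsort(-sims)[:k]  # indices of the ~200 nearest candidates
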
1. Tagset grouping
Cluster similar tagsets together offline. This creates semantic neighborhoods that reduce the search space dramatically.
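
For example, k-means over tagset embeddings; the sizes below are made up, since the thread doesn't publish them:

import numpy as np
from sklearn.cluster import KMeans

tagset_vecs = np.random.rand(10_000, 64)  # stand-in for real tagset embeddings
labels = KMeans(n_clusters=500, n_init=10, random_state=0).fit_predict(tagset_vecs)
# labels[i] = semantic neighborhood of tagset i; later search stays inside one cluster
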
0. Data cleaning and tagging
Turn messy merchant data into clean, structured tags. Think "vegan options" or "quick delivery" instead of raw descriptions.
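
As a toy sketch, a rule-based tagger; the real pipeline is surely richer and likely ML-assisted:

def tag_merchant(raw_description: str) -> list[str]:
    rules = {
        "vegan options": ("vegan", "plant-based"),
        "quick delivery": ("express", "15 min", "fast"),
    }
    text = raw_description.lower()
    return [tag for tag, kws in rules.items() if any(k in text for k in kws)]

print(tag_merchant("Plant-based bowls, express delivery"))
# ['vegan options', 'quick delivery']
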
Most teams would either abandon LLMs or burn cash.
DoorDash found a third path.
They built a hybrid system:
✗ Context bloat with millions of items
✗ 200M+ users = seven-figure monthly costs
✗ Raw text embeddings miss critical structure
Cold start recommendations seem perfect for LLMs.
But naive approaches fail at scale:
Recommendation systems with LLMs?
DoorDash's architecture:
What are you reading this week?