Vibe Coding 101 · Luma
Setting up a repo for vibe coding, and the basics of reproducible prompts
We're hosting a "how to vibe code 101" live stream this week. The target audience is non-technical people who want to learn how to vibe code, but also technical people struggling to organize their vibe-coded projects.
Hope you can come!
luma.com/b6c91diz
27.10.2025 19:14
Claude Code overview - Claude Docs
Learn about Claude Code, Anthropic's agentic coding tool that lives in your terminal and helps you turn ideas into code faster than ever before.
The most famous public Facebook-ism is "Move fast and break things."
"Done is better than perfect." "Code wins arguments."
I've found that introducing concepts like that in my AGENTS.md / claude.md file tends to drive the AI toward agreeing with me.
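A minimal sketch of what a section like that can look like in an AGENTS.md / claude.md; the wording below is illustrative, not a quote from my real file:

```markdown
## Engineering culture (how to weigh tradeoffs)

- Move fast and break things: prefer shipping a working slice over debating edge cases.
- Done is better than perfect: land the simplest version that passes tests, then iterate.
- Code wins arguments: when we disagree, write a small spike and compare results instead of speculating.
```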
21.10.2025 21:03
Pretty wild to think how far ahead Apple once was with Siri, given the current state of Apple's personal assistant products compared to something like ChatGPT.
Might be the biggest technology miss of all time.
19.10.2025 05:04
My agents.md / claude.md
My agents.md / claude.md. GitHub Gist: instantly share code, notes, and snippets.
lore drop:
gist.github.com/randallb/00...
My codex system prompt (also symlinked to claude)
I added the team culture section recently. It seems to be helping it ask fewer "why don't we add x as well? Let me know if you need y" type questions.
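For context, the "team culture" section is roughly this shape (a paraphrased sketch, not the literal gist contents):

```markdown
## Team culture

- Do the task that was asked; don't bundle in adjacent improvements.
- If you notice something worth doing later, list it at the end of your reply instead of pausing to ask.
- Only stop to ask a question when you're blocked or the requirements are genuinely ambiguous.
```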
12.10.2025 18:43
How do we feel about the personification of AI assistants, i.e. Claude vs. ChatGPT?
I'm honestly not sure... pros and cons for each. What do you think?
02.10.2025 17:43
Hosting an agentic-first coding workshop tomorrow.
After you come to this, you should be able to one-shot most of your features.
events.zoom.us/ev/AvTick2O...
30.09.2025 22:34
I don't think people fully appreciate just how much better GPT-5 is than GPT-4. ChatGPT doesn't really showcase how much better it is, though you can see it if you squint.
GPT-5 agents are just otherworldly at staying on task.
30.09.2025 05:57
If you have the concept of "tribal knowledge" in your company at all, ngmi.
AI agents make it simpler than ever to update docs, and they thrive on accurate, up to date docs.
29.09.2025 20:57
I have to convince Codex to rubber-duck with me to fix the most complex problems. Like, I don't really know what's going on with this weird type inference issue I'm having, but when I get Codex to write a little lab notebook and then keep working from it, it fixes the issue.
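The "lab notebook" is nothing fancy; it's a markdown file I ask it to keep updating between attempts. A hypothetical structure (the file name, headings, and TypeScript-flavored details are all illustrative):

```markdown
# Lab notebook: weird type inference issue

## Hypotheses
1. The generic parameter is being widened at the call site.
2. An import cycle is degrading the inferred type to `any`.

## Experiments
- [x] Inline the helper and re-check the inferred type -> still wrong, so hypothesis 1 is out.
- [ ] Break the import cycle and re-run `tsc --noEmit`.

## Current best guess
(updated after every attempt)
```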
23.09.2025 06:02
I want to be the Rakim of AI.
Prompts / AI engineering from today will look like the Fresh Prince. I want to introduce flow into the AI world.
Like nobody is going to get this tweet, and that's ok. :)
15.09.2025 00:57
The previous term might be "prompt engineering," but that focuses on a single prompt. I'm thinking more like: how do you chain prompts together so that an agent / assistant can find the information automatically?
There's overlap with context engineering, so idk if it's different yet.
11.09.2025 21:04
I think there's a concept I'm landing on... "Inference Engineer." Context engineering is about how to make the whole pipeline good, give the AI the right context, etc. Inference engineering would be about considering how specific information flows into an LLM.
11.09.2025 21:04
So we run our standups via Claude Code as facilitator (it gathers the data, records info, etc.).
Today it was ineffective b/c we updated a different runbook, which caused it to not work right.
We don't have evals on it. Turns out evals are required for anything mission critical.
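Even a simple smoke test could catch this kind of regression. A sketch, assuming Claude Code's non-interactive print mode (`claude -p`); the runbook path and expected section headings are placeholders:

```python
# Hypothetical smoke test for the standup runbook. Runs the facilitator prompt
# non-interactively and checks that the output still has the sections we rely on.
import subprocess

EXPECTED_SECTIONS = ["Yesterday", "Today", "Blockers"]  # placeholder headings

def test_standup_runbook_smoke() -> None:
    result = subprocess.run(
        ["claude", "-p", "Run the standup facilitator in docs/runbooks/standup.md"],
        capture_output=True,
        text=True,
        timeout=600,
    )
    assert result.returncode == 0, result.stderr
    for section in EXPECTED_SECTIONS:
        assert section in result.stdout, f"runbook output is missing section: {section}"
```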
11.09.2025 16:24
Launching a startup as a remote team is such a bad idea.
Honestly it'd be better to have a single person launch something and then add team members progressively than to try to coordinate multiple people across multiple timezones.
Especially in the age of AI.
11.09.2025 06:02
Everything you need to know about GitHub:
Its homepage is literally unusable and completely ignored.
Its default Actions viewer is a node-based thing meant to handle thousands of connected jobs.
Please, someone help us. As bad as SourceForge was, GitHub is now just as bad.
11.09.2025 05:00
The raw thought traces are legit triggering. And if a frontier lab trained them to be less triggering, the LLM would actually just be less useful, likely stupider, and potentially deceptive.
11.09.2025 00:00
For me, it was something I actually empathized with. It felt like the actual way anxiety feels. Like, ruminating over something you have very little control over. Feeling frustrated that you continue to do it. Etc.
11.09.2025 00:00
and most notably, when things go wrong it can be crazytown to read. I read a trace from an LLM that was confused, and in a loop, and it would say things like "I wish I could just stop saying..." and then it would write the same word 800,000 times.
11.09.2025 00:00
I've only seen a few, to be clear, and even the traces that DeepSeek puts out don't seem to be the actual "thought" traces. If you look at an instruct model and then put it in a sort of thought loop, it's probably more analogous to "real" traces... ie messy, confusing sometimes,
11.09.2025 00:00
I know a lot of people are annoyed by the summarized thought traces from LLMs, but as someone who has deep empathy and a history of trauma, I can tell you the raw thought traces are emotionally draining sometimes.
11.09.2025 00:00
I presume this is using the Responses API, which means OpenAI is going to benefit from some caching on their side, but also they can use the *real* thinking traces, and then keep those around even between messages.
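If it is the Responses API, the mechanism is roughly this: chain turns with `previous_response_id` so the server can reuse the prior turn's state, including reasoning it never returns to the client. A minimal sketch with the `openai` Python client; the model name and prompts are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# First turn: the model produces reasoning plus a visible answer.
first = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "medium"},
    input="Plan the refactor for the billing module.",
)

# Follow-up turn: pointing at the previous response lets the server carry that
# turn forward (and cache it) instead of the client re-sending a lossy summary.
follow_up = client.responses.create(
    model="gpt-5",
    previous_response_id=first.id,
    input="OK, start with step 1.",
)

print(follow_up.output_text)
```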
10.09.2025 21:02
Additionally, again for Codex, being able to flip between thought modes (without the "think" keyword) has been great. The full transcript of what the model is seeing when it does its next pass is useful.
10.09.2025 21:02
It's a small thing, but the noticeable lack of sycophancy is huge. Being complimented for every question and thought is actual cognitive load, and costs attention / activation energy for the user.
10.09.2025 21:02
Right now, I'm starting to see the optimization patterns emerge with GPT-5. It feels VERY different than GPT-4. It also feels VERY different than any Claude gen 4 model.
10.09.2025 21:02
GPT-5 feels meh because it's actually good enough that people don't know how to push it yet. It's kind of like when early game consoles got their first wave of games, before developers had learned how to build for and optimize the hardware.
10.09.2025 21:02
My read on the GPT-5 launch is that the consensus is: meh.
I think that's wrong, though. Think about the evolution thus far:
GPT-3 was like "wow, these sentences are coherent!" 3.5 was like "wow, this is actually useful!" 4 was like "whoa, it's useful and usually doesn't hallucinate!"
10.09.2025 21:02
High should be reserved for thoughtful tradeoffs. Code reviews are really, really useful to run in high, and then go back and have medium or low implement. (If it's straightforward enough, honestly minimal is fine too.)
10.09.2025 16:03
Don't always use the highest thinking LLM to do your job. The pattern I've found that works for code:
GPT-5 minimal for finding all the files required and building the context. Low for writing initial code / unit tests / docs.
If low gets stuck, go to medium.
When to use high?
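Per the post above, high is for code review and thoughtful tradeoffs. Outside Codex you can approximate the same ladder with the Responses API's reasoning effort setting; a sketch with placeholder prompts:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, effort: str) -> str:
    # effort is one of "minimal", "low", "medium", "high"
    response = client.responses.create(
        model="gpt-5",
        reasoning={"effort": effort},
        input=prompt,
    )
    return response.output_text

# Cheap pass to gather context, low effort to draft, high effort only for review.
context = ask("List the files involved in the billing bug and summarize each one.", "minimal")
draft = ask(f"Using this context, write the fix and a unit test:\n{context}", "low")
review = ask(f"Code-review this change for correctness and tradeoffs:\n{draft}", "high")
```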
10.09.2025 16:03
Codex's actual output is just simply better than Claude Code's. I'm consistently using low, then having high go back and code review, and it just looks like code that I'd be proud of writing at FB. It's insane.
10.09.2025 07:58
That's probably net a good thing for attention, but it makes it seem a lot slower. For Codex to be a daily driver, you have to start on minimal or low, build out the context / project memo, then upshift into high and let it roll.
09.09.2025 20:56