In a closed ecosystem the systemโs creator has to come up with the killer use case.
In an open ecosystem anyone can come up with the killer use case.
@komorama.bsky.social
Generalist fascinated by complex adaptive systems. Co-founder and CEO of Common Tools. https://komoroske.com
In a closed ecosystem the systemโs creator has to come up with the killer use case.
In an open ecosystem anyone can come up with the killer use case.
I just published my weekly reflections:ย docs.google.com/document/d/1...
LLM'sย nescience. Chatbots as a party trick. Vibecoding as Doritos. The same origin paradigm's original sin: merging data and apps. The vibecoding reuse problem. Negative friction of distribution. Duocultures. The Minsky Moment.
What would it look like to decentralize apps?
That app is the nexus of power because it is where the data lives. The app is at the top of the stack deciding what pixels to render. Decentralization at other layers doesn't matter nearly as much.
What if instead of buying software from a store, you could grow it in your garden?
05.11.2025 21:43 โ ๐ 5 ๐ 2 ๐ฌ 0 ๐ 1I just published my weekly reflections: docs.google.com/document/d/1...
AI as amplifier. One of AI's superpowers: retconning. Human-out-of-the-loop. Convenience vs control. Superficially perfect answers. Technocalvinism. Super citizens. Elegant heuristics. Shame as the moral equivalent of pain.
If LLMs make thinking 10x cheaper, will you think 10x less, or 10x deeper?
31.10.2025 23:14 โ ๐ 3 ๐ 0 ๐ฌ 2 ๐ 1Most vibe coding tools produce Potemkin software. Demos great, falls over in the slightest breeze.
We need infrastructure where strangers can refine vibe-coded software so improvements benefit everyone... yet it's somehow still safe with your private data.
I just published my weekly reflections: docs.google.com/document/d/1...
Coding with fractured attention. LLMs as Clever Hans. Potemkin software. Compounding engineering. Abducting knowhow into knowledge. Code sharing by cross pollination. Faux agency. Scarcity forcing synthesis. The sacred fool.
Your perfect personal use case is an edge case for an aggregator.
22.10.2025 16:18 โ ๐ 1 ๐ 1 ๐ฌ 0 ๐ 0I just published my weekly reflections: docs.google.com/document/d/1...
Triopoly dynamics. Chatbots as filming stage plays. Catastrophic power. The quality sphincter. Leverage on taste. Ballistic gel for LLM containment. Reactive JSON graphs. AI's last mile problem. The Geek Fallacy.
Software should feel like a personal garden that grows for you, not something some stranger constructed.
17.10.2025 19:53 โ ๐ 3 ๐ 1 ๐ฌ 0 ๐ 0Resonance is when want and need are aligned.
16.10.2025 16:50 โ ๐ 0 ๐ 1 ๐ฌ 0 ๐ 0I just published my weekly reflections: docs.google.com/document/d/1...
ChatGPT is not like Windows.ย Canned corn from the convenience store. LEARNINGS.md for compounding agentย performance. LLMs as meta-boundary objects. Do-think vs do-do.ย Seeing Like a Language Model. The Hyper Era. Sousveillance.
Imagine software that doesn't feel like visiting the DMV, but like working with a master craftsperson who knows exactly what you need.
09.10.2025 14:29 โ ๐ 3 ๐ 0 ๐ฌ 1 ๐ 0Software today is like eating canned corn from the convenience store. What if it could be more like a personal farmer's market?
07.10.2025 14:42 โ ๐ 5 ๐ 2 ๐ฌ 0 ๐ 0I just published my weekly reflections: docs.google.com/document/d/1...
AI as tool, not friend. CRUD-y apps. LLMs grow like a crystal, not a plant. A self-steering product northstar metric: Resonant Engagement Moments (REMs). Disposable software. When want and need are aligned.
Code is like a skeleton, LLMs are like muscle.
Skeleton alone: precise structure that can't move. Muscle alone: quivering mass on the floor with no leverage.
Put them together correctly and you get a body capable of threading a needle or throwing a fastball.
AI should be a tool, not a friend.
Tools extend your agency.
Synthetic 'friends' are engagement optimization in disguise.
The Star Trek computer never said "good morning."
I just published my weekly reflections: docs.google.com/document/d/1...
AI's last-mile problem. Responsibility laundering. AI as muscle, software as skeleton. Workslop. Chatbots as filming stage plays. Alienable data. Software perfectly tailored to you. The death of one-size-fits-none software.
Authentic things feel natural, inescapable, like it could never be any other way... and you'd never want it to be.
26.09.2025 17:58 โ ๐ 1 ๐ 0 ๐ฌ 0 ๐ 0That 'alienable' lens came from @azeem.bsky.social earlier this week. I love the way it captures why data escapes our control the moment it replicates.
25.09.2025 19:39 โ ๐ 2 ๐ 0 ๐ฌ 0 ๐ 0The same origin paradigm ties policies to apps/domains, not data itself. This lets your data be alienated from your intent, destroying contextual integrity. But if policies attached to dataโflowing wherever it flowsโthey'd be inalienable. Your intentions would travel with it.
25.09.2025 19:39 โ ๐ 0 ๐ 0 ๐ฌ 1 ๐ 0Data is alienable. Because it replicates so easily, once it's out of your sight it's out of your control. When used behind your back, it loses its contextual integrityโalienated from your intention, turned against your interest. How can we make your data always work for you?
25.09.2025 19:39 โ ๐ 1 ๐ 1 ๐ฌ 1 ๐ 0Right now it feels theoretical to most people. Would love to build demos that make it visceral - that "oh shit" moment when you see your own AI get tricked. Small Discord on the site if anyone wants to help create harmless but eye-opening examples.
24.09.2025 20:41 โ ๐ 0 ๐ 0 ๐ฌ 1 ๐ 0Prompt injection is when someone tricks an AI into ignoring its instructions. There's no easy fix because LLMs process all text as potentially executable.
Grabbed promptinjection.wtf and threw together a simple site explaining why this matters.
The official recording of my O'Reilly AI Codecon talk is now live: www.youtube.com/watch?v=AhW5...
The slides are at common.tools/talks/why-ce...
www.economist.com/science-and-...
A structural defense for prompt injection is a requirement to unlock the potential of LLMs to help us in our everyday lives.