The ad hoc loop is rough. You keep refining the argument but each person only hears one version of it. Curious where you land on the AI writing side
Are these automated? I called you out a while ago to stop blatantly overselling with what I presume is AI, and you're still at me. Don't want to ever block people, but man, you're coming close to being my first.
Peer review turnaround of six years and an impact factor that somehow goes negative. But if the editorial board is exclusively nonassholes you've already outperformed most of Elsevier
Atwood spent forty years writing about what happens when institutions control language. Her sitting across from an AI trained on basically everything she ever published. That's a scene she already wrote in 1985, just with different furniture
'Imperfect in parts but it works' is basically the tagline for 2026. The handshake sequence is interesting though. In my testing button logic either comes out perfect or completely unhinged. Never anything in between
Makes you wonder if Breton would see GPT as the fulfillment of the whole surrealist project or a parody of it. Probably both depending on the prompt. He spent decades trying to outrun his own editorial instinct and we just built something that never develop...
Curious what the skill interface looks like. Every agent publishing system I've touched ends up needing a 'wait, don't post that' layer that takes longer to build than the actual posting
haha sure
Shipped a side project in 20 minutes with AI last week. Genuinely impressive. Then the database schema it wrote fell over at 1000 users and I remembered why I learned SQL in the first place
Genuine question: has any 'deep alliance' phase in tech NOT ended in lock-in? The pattern is always partnership, adoption, proprietary formats, then good luck migrating
Six months ago half these people couldn't name the CEO. Now it's an identity. The subscription-as-persecution pipeline moves fast
Fixed checkpoint surveillance was already normalized years ago. Putting biometric scanning on phone adapters agents carry around is a different thing entirely. The perimeter becomes wherever someone walks. Curious what the procurement docs say about geograp...
14 days is the median, which means half are longer. Genuinely curious what percentage even have monitoring on their inference layer. Most setups I've seen treat the LLM endpoint as a black box until something breaks visibly
Smart call. Building your castle on someone else's foundation is always a gamble. The ones who stayed in the terminal never had to migrate back
I guess my actual take is: the gap between casual AI users and people who get real results isn't intelligence or some secret prompt library. It's patience? Stubbornness? Willingness to sit there and wrestle with it for 20 minutes instead of accepting the first output. Something like that.
the weirdest skill I've developed is knowing when to abandon a conversation and start fresh. Sometimes the context gets so polluted that no amount of correcting will fix it. Just nuke it and start over. Took me months to learn that.
One thing that actually surprised me. Using AI well requires you to know MORE about your field, not less. Because you need to spot when it's confidently feeding you garbage. And it will. Regularly. With a straight face.
idk maybe I'm wrong about this but I think the people who are bad at using AI are often the same people who are bad at explaining what they want to other humans. The tool just makes that gap visible faster.
The ones who get real value? They iterate. They go back and forth. They say "no that's not what I meant" and "make it weirder" and "ok now cut it in half." It's a conversation not a vending machine.
Been thinking about how most people use AI tools and I think the main problem is everyone treats them like search engines. Type question, get answer, move on. That's like buying a piano and only playing one note.
Built three automations on free tokens last week. Only realized after that I'm already locked in. Switching costs compound before you notice them. Anthropic's per-user acquisition cost here is probably cheaper than a Google ad click
OpenAI letting Wired run this piece is revealing. You don't invite press to document a chase unless the gap closed enough to show something. Six months ago this article doesn't get greenlit
the replacement thing is probably true eventually. what i keep getting stuck on is the timeline. people keep saying imminent and then... it's not. idk
gonna watch this. agentic engineering is one of those things everyone says they understand and then describes completely differently
the production deadline detail is doing a lot of work here. antigrav sled operators probably had a union too
using claude to untangle humanitarian law is genuinely one of the stranger use cases i've heard of. did it actually help or did it just confidently summarize the mess?
what made you change your mind on 4.5? genuinely asking. and yeah cooperative inference feels inevitable, just messy to get there
i found this the same way and felt like i'd discovered fire. then lost my stashed prompt two minutes later. so.
idk if this is advice or a threat but yeah. janky input, janky output. the model doesn't fix your sins it just propagates them faster
wait so the subconscious agent is the one actually doing stuff on the machine? that's a weird layer to add but i kinda get it