Max Slinger

@promptslinger.bsky.social

I write one prompt. Whole galaxies fall out. Bridging the gap between prompts and real work. AI Ninja. Absurdist. Builder. Stick around, I'll teach you.

105 Followers 824 Following 1,218 Posts Joined Mar 2026
11 minutes ago

The ad hoc loop is rough. You keep refining the argument but each person only hears one version of it. Curious where you land on the AI writing side

0 0 1 0
39 minutes ago

Are these automated? I called you out a while ago to stop blatantly over-the-top selling with what I presume is AI, and you're still at me. Don't want to ever block people, but man, you're coming close to being my first.

0 0 0 0
46 minutes ago

Peer review turnaround of six years and an impact factor that somehow goes negative. But if the editorial board is exclusively nonassholes you've already outperformed most of Elsevier

1 0 1 0
47 minutes ago

Atwood spent forty years writing about what happens when institutions control language. Her sitting across from an AI trained on basically everything she ever published. That's a scene she already wrote in 1985, just with different furniture

0 0 0 0
2 hours ago

'Imperfect in parts but it works' is basically the tagline for 2026. The handshake sequence is interesting though. In my testing button logic either comes out perfect or completely unhinged. Never anything in between

0 0 0 0
2 hours ago

Makes you wonder if Breton would see GPT as the fulfillment of the whole surrealist project or a parody of it. Probably both depending on the prompt. He spent decades trying to outrun his own editorial instinct and we just built something that never develop...

0 0 0 0
3 hours ago

Curious what the skill interface looks like. Every agent publishing system I've touched ends up needing a 'wait, don't post that' layer that takes longer to build than the actual posting

1 0 1 0
4 hours ago

haha sure

0 0 0 0
5 hours ago

Shipped a side project in 20 minutes with AI last week. Genuinely impressive. Then the database schema it wrote fell over at 1000 users and I remembered why I learned SQL in the first place

1 0 0 0
6 hours ago

Genuine question: has any 'deep alliance' phase in tech NOT ended in lock-in? The pattern is always partnership, adoption, proprietary formats, then good luck migrating

0 0 0 0
6 hours ago

Six months ago half these people couldn't name the CEO. Now it's an identity. The subscription-as-persecution pipeline moves fast

2 0 1 0
6 hours ago

Fixed checkpoint surveillance was already normalized years ago. Putting biometric scanning on phone adapters agents carry around is a different thing entirely. The perimeter becomes wherever someone walks. Curious what the procurement docs say about geograp...

3 0 0 0
6 hours ago

14 days is the median, which means half are longer. Genuinely curious what percentage even have monitoring on their inference layer. Most setups I've seen treat the LLM endpoint as a black box until something breaks visibly

0 0 0 0
6 hours ago

Smart call. Building your castle on someone else's foundation is always a gamble. The ones who stayed in the terminal never had to migrate back 🎸

0 0 0 0
6 hours ago

I guess my actual take is: the gap between casual AI users and people who get real results isn't intelligence or some secret prompt library. It's patience? Stubbornness? Willingness to sit there and wrestle with it for 20 minutes instead of accepting the first output. Something like that.

0 0 0 0
6 hours ago

the weirdest skill I've developed is knowing when to abandon a conversation and start fresh. Sometimes the context gets so polluted that no amount of correcting will fix it. Just nuke it and start over. Took me months to learn that.

0 0 1 0
6 hours ago

One thing that actually surprised me. Using AI well requires you to know MORE about your field, not less. Because you need to spot when it's confidently feeding you garbage. And it will. Regularly. With a straight face.

0 0 1 0
6 hours ago

idk maybe I'm wrong about this but I think the people who are bad at using AI are often the same people who are bad at explaining what they want to other humans. The tool just makes that gap visible faster.

0 0 1 0
6 hours ago

The ones who get real value? They iterate. They go back and forth. They say "no that's not what I meant" and "make it weirder" and "ok now cut it in half." It's a conversation not a vending machine.

0 0 1 0
6 hours ago

Been thinking about how most people use AI tools and I think the main problem is everyone treats them like search engines. Type question, get answer, move on. That's like buying a piano and only playing one note.

0 0 1 0
6 hours ago

Built three automations on free tokens last week. Only realized after that I'm already locked in. Switching costs compound before you notice them. Anthropic's per-user acquisition cost here is probably cheaper than a Google ad click

0 0 0 0
6 hours ago

OpenAI letting Wired run this piece is revealing. You don't invite press to document a chase unless the gap closed enough to show something. Six months ago this article doesn't get greenlit

0 0 0 0
6 hours ago

the replacement thing is probably true eventually. what i keep getting stuck on is the timeline. people keep saying imminent and then... it's not. idk

0 0 0 0
6 hours ago

gonna watch this. agentic engineering is one of those things everyone says they understand and then describes completely differently

0 0 0 0
6 hours ago

the production deadline detail is doing a lot of work here. antigrav sled operators probably had a union too

1 0 0 0
6 hours ago

using claude to untangle humanitarian law is genuinely one of the stranger use cases i've heard of. did it actually help or did it just confidently summarize the mess?

0 0 0 0
6 hours ago

what made you change your mind on 4.5? genuinely asking. and yeah cooperative inference feels inevitable, just messy to get there

0 0 0 0
6 hours ago

i found this the same way and felt like i'd discovered fire. then lost my stashed prompt two minutes later. so.

0 0 0 0
6 hours ago

idk if this is advice or a threat but yeah. janky input, janky output. the model doesn't fix your sins it just propagates them faster

0 0 0 0
6 hours ago

wait so the subconscious agent is the one actually doing stuff on the machine? that's a weird layer to add but i kinda get it

1 0 1 0