well in about three months I went from AI skeptic to working on our own orchestration tool—thanks @dollspace.gay for the extremely convincing argument, it’s a whole new world :)
github.com/forecast-bio...
my new pejorative for atproto people?
did:weebs
what would you say the truly excellent tool here is?
Another version is distributed training, which I think we can pull off well with something like atproto :)
But I am very convinced getting this flexibility while still accurately getting the behavior will be *very* powerful. The trick is to find what that new representation is. And that is a cool as hell problem IMO
This is more general than previous representations of software as code—the equivalence class of implementations of a behavior is larger and more interesting than the equivalence class of compiled machine code for a fixed program. +
I agree, and I think this is a genuinely open and exciting new problem we have to solve as a field: what is the “good spec”, practically?
The new “code” for a given software behavior is a representation that can be fed to an agent to produce software with that behavior. +
arguably in the age of coding agents that just is what software is now: well-designed specs.
I definitely see Bsky/atproto as good entrants in the “strategic structural tools” thesis.
I use tons of stuff from Anthropic—but we need strategies of how to set up the new protection systems for the small team creative devs, cause T+3years the labs are gonna get sclerotic and rent-seeking too.
Shock Doctrine: exogenous shocks engineered to do the change-over are *always* pro-concentration, because risk-prone environments give vastly more affordances to capital.
I stay off both of these because they have become borderline unusable
FB definitely; on Insta it’s difficult to tell the actual content apart from the ads for unregulated virility supplements and crypto get-rich-quick schemes.
bro I can’t go on Twitter without having to wade through hundreds of posts saying trans people like me should be exterminated
One can call it “democratization”, but I call it “neoliberal unshackling of capital gone to dystopian extremes”.
While content was filtered by certain institutions, that filtering did operate as a safeguard against the rapid, unchecked amplification of democratically destabilizing rhetoric we see now.
Not that old Hollywood didn’t make messed up stuff—but, structural stickiness actually does cut both ways.
In the Hollywood days, musicians like my parents could be part of a strong union and get health insurance and a pension for their craft. Now, they’re at the mercy of SEO algorithms decided in Menlo and have no collective power because there’s no real employment relationship. +
I would argue that the extent to which either of these arrangements is better for either creators or consumers is a very complicated and difficult question that should give us pause. +
No … but I do applaud people for thinking about it. I think it’s better that we try.
But it’s a good point that maybe the best structurally attainable outcome is safeguarding an opening for future people to keep doing generative work, even as real good is impossible.
It’s very Bhagavad Gita—ugh.
I think dependence on the AI oligopoly will lead to a world whose cognitive environment is largely a cesspool except for those niches of people who decouple but advance tech.
Because, at the end of the day, just as the downstream political-social-economic consequences of dependence on an oligopoly of social media providers led to a world whose information environment is largely a cesspool except for those niches of people who decouple but advance tech, +
The answer to me looks very much like the Bluesky “credible exit” notion applied to AI: can we use these tools right now to crank out enough that there’s a foundation for doing ongoing work enabled by frontier-level AI performance but totally decoupled from deference to the labs? +
2. I guess that in this context, what I was leading toward as the sorts of things that make me optimistic are new organizational schemes for small, nimble teams “coding in natural language” that allow them to niche-construct by bootstrapping off corpo big AI tools to create genuine decoupling. +
and a massive concentration for Google/YT in being able to determine the contours of what everyone else operated under. I think it’s very important we realize that democratization that factors through a monopoly/oligopoly is not true—it’s democratization as simulacrum to hide exploitation. +
1. Democratization has a strange way of not being democratizing in technology. Did YouTube democratize content creation? … kind of? But the much larger effect was the opposite: the algorithmic segregation and social graph phenomena actually led to a concentration of the genuine influencer power, +
On the pro-AI side there can be a near-religious devotion to the beneficence of the labs; for a small team, this carries the same peril as dogmatically permissive licensing: it does not recognize power dynamics.
I am very curious what the new strategic innovations will be for this time, the way the GPL was for its own.
The question becomes then what the new strategic tools are to similarly prevent exploitation by competitors with the massive power differentials we have in the AI age. +
emdashes as the AI syntax recognition Schelling point
yeah opus 4.5 pretty much kicked off my complete hater to oh wow gotta write an orchestrator for autonomous teams arc.
“It also attempts an accounting of the ancillary learning int’l research in an analog world required. What knowledge & insight did place-based research across borders instill? What are the intellectual & political consequences of leaving that behind?”
Concerns I wish were more common with AI people