The bet by investors like Y Combinator, in other words, is on a broadly familiar form of disruption but with a couple of twists.
Generative AI may expose some incumbent firms to outpricing, outselling, and outmaneuvering by start-ups that embrace it, while more and more people acclimate to strange, new, chat-centric modes of computing. In the meantime, it may also help glut, jam, and temporarily destroy the entire markets in which they're operating. It's a version of creative destruction in which the destruction is preemptive, indiscriminate, and maybe looks a bit like sabotage — with a little bit of help from all of us, of course.
implied agentic ai pitch: we will lure users with unrealistic promises, and in the process turn them into spammers that destroy the entire existing marketplace. then... profit?
10.10.2025 13:13
This attitude toward externalities — not my problem, and in any case worth it in exchange for a small advantage — follows the approximate logic of a spammer and often comes wrapped in the language of AI hustle culture. It's also understandable from the perspective of a job seeker who feels constantly thwarted by employers' automated systems, which seem to treat seekers with similar indifference or contempt, or by platforms like LinkedIn and Indeed that, while nominally intended to connect two parties with shared interests (one needing specific services, the other offering them), can feel more like social-media-style black holes for engagement. It's an escalation that will likely be met with more escalation: countermeasures by job-listings platforms and hirers to prevent access by AI agents; more aggressive automated filtering; different hiring routines altogether, making it even harder to get through the door to a coveted interview. Mercenary (and slightly deceptive) automation tools like this, which are being pitched all over right now and already wreaking havoc in, for example, online dating, depend on two temporary circumstances to work, if they ever actually do: (1) that most other people don't have access to them, giving the user an edge, and (2) that the people and parties on which they're used will tolerate and take no action against them. In other words, if you take their pitches at face value, they're pretty obviously doomed in the medium term, in the sense that they'll either be rejected by the systems they operate in or simply ruin them for everyone.
Taking stock of the first few years of mainstream AI deployment, though, raises an important question. What if that's sort of the point? Or at least a world worth thinking about in a more thorough, long-term way? Generative image and video tools, for example, have significantly degraded social-media platforms, allowing bad actors and regular people to fill them with slop, intensifying existing problems with spam and deceptive content while thwarting old solutions. And, hey, look at that: Suddenly, OpenAI and Meta are launching new social networks based on AI, on which posting generated content is the point, not a problem to be solved. Generative AI may be placing immense stress on educational institutions and worsening the already strained relationships between teachers and students, but wait — every AI company is selling ed tech now.
trying to think through what's unusual about so many AI startups, and about the gaps between their pitches, the real world, and what they would actually need to succeed nymag.com/intelligence...
10.10.2025 13:13
AI companies are actually AI gap arbitrage companies
10.10.2025 13:34
Sora is presumably extremely expensive to run, hence OpenAI's use of a chained invite program to roll it out. In its early form, it brings to mind the early days of image generators like Midjourney, whose video model Meta is now borrowing for Vibes. Like Sora, Midjourney in 2022 was a fascinating demo that was, for a few days, really fun to mess with for a lot of the same reasons:
A vast majority of the images I've generated have been jokes — most for friends, others between me and the bot. It's fun, for a while, to interrupt a chat about which mousetrap to buy by asking a supercomputer for a horrific rendering of a man stuck in a bed of glue or to respond to a shared Zillow link with a rendering of a "McMansion Pyramid of Giza."
...I still use Midjourney this way, but the novelty has worn off, in no small part because the renderings have just gotten better — less "strange and beautiful" than "competent and plausible." The bit has also gotten stale, and I've mapped the narrow boundaries of my artistic imagination.
Playing with Sora is a similar experience: a destabilizing encounter with a strange and uncomfortable technology that will soon become ubiquitous but also rapidly and surprisingly banal. It also produces similar results: a bunch of generations that are interesting to you and your friends but look like slop to anyone else. The abundant glitches, like my avatar's tendency to include counting in all dialogue, help make the generations interesting. Many of the videos that are compelling beyond the context of their creation are interesting largely as specimens or artifacts — that is, as examples of how a prompt ("sam altman mounted to the wall like a big mouth Billy bass, full body") gets translated into ... something. (As one friend noted, many of these videos become unwatchable if you can't see what the prompt was.)
it's only been a week but I'd bet a lot of early users are having experiences like this with Sora: manic, exploratory onboarding followed by something like a hangover. it's a pattern! nymag.com/intelligence...
06.10.2025 16:32
it was never a great situation, but I think a lot of users forget that the better days of twitter involved tension between users and the platform — the authentically good accounts and communities were always breaking it in small ways. was also torn between public markets and a frustrating weirdo ceo
03.10.2025 14:59
part of the big theoretical "twitter do-over" is this inevitable jack-brained dynamic, too
03.10.2025 14:53
You're right, and as I mentioned in the post it's a real and interesting issue to work out. But I think solving the privacy issue with blanket confidentiality for conversations in which OpenAI's automated platform is analogous to the licensed professional would be hugely legally helpful to the firm
02.10.2025 15:32
Right, but the takedown paradigm is about UGC, and AI interactions have much more in common with direct publication — also, even in pure UGC environments, straightforward takedown compliance wasn't always enough for rightsholders (YouTube ContentID and its many children)
02.10.2025 15:27
The opt-out process for the new version of Sora means that movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyright material in videos the tool creates.
This is pretty close to the maximum possible bid OpenAI can make here, in terms of its relationship to copyright — a world in which rights holders must opt out of inclusion in OpenAI's model is one in which OpenAI is all but asking to opt out of copyright as a concept. To arrive at such a proposal also seems to take for granted that a slew of extremely contentious legal and regulatory questions will be settled in OpenAI's favor, particularly around the concept of "fair use." AI firms are arguing in court — and via lobbyists, who are pointing to national-security concerns and the AI race with China — that they should be permitted not just to train on copyrighted data but to reproduce similar and competitive outputs. By default, according to this report, OpenAI's future models will be able to produce images of a character like Nintendo's Mario unless Nintendo takes action to opt out. Questions one might think would precede such a conversation — how did OpenAI's model know about Mario in the first place? What sorts of media did it scrape and train on? — are here considered resolved or irrelevant.
Coming from Altman, though, it assumes an additional meaning: He would very much prefer that his company not be liable for potentially risky or damaging conversations that its software has with users. In other words, he'd like to operate a product that dispenses medical and legal advice while assuming as little liability for its outputs, or its users' inputs, as possible — a mass-market product with the legal protections of a doctor, therapist, or lawyer but with as little responsibility as possible. There are genuinely interesting issues to work out here. But against the backdrop of numerous reports and lawsuits accusing chatbot makers of goading users into self-harm or triggering psychosis, it's not hard to imagine why getting blanket protections might feel rather urgent right now.
On both copyright and privacy, his vision is maximalist: not just total freedom for his company to operate as it pleases, but additional regulatory protections for it as well. It's also probably aspirational — we don't get to a copyright free-for-all without a lot of big fights, and a chatbot version of attorney-client privilege is the sort of thing that will likely arrive with a lot of qualifications and caveats. Still, each bid is characteristic of the industry and the moment it's in. So long as they're building something, they believe they might as well ask for everything.
OpenAI's maximalist "asking for infinite money" strategy has a legal counterpart, too nymag.com/intelligence...
01.10.2025 13:27
wrong! history is actually a series of binding essays
18.09.2025 19:16
no joke, having two toddlers was when this really kicked me in the gut. two lines crossing on a really depressing graph
18.09.2025 15:06
with the same personal caveats and granting the limits of the perspective of a child: watching so much of the generation of gruff, stoic, intelligent men I grew up around lose complete control and turn into addled neurotic news freaks has been disconcerting and a little surprising
18.09.2025 14:57
Was Tyler Robinson radicalized online? To whatever extent it's a meaningful, coherent question, the best answer we have, based on what little evidence we have โ and with the expectation that more will be revealed in the coming days, weeks, and months โ is: maybe? We don't know. If and until we find out, though, the days since Kirk's shooting have posed a related question: What about everyone else?
If the popular theory of online radicalization is that people lose touch with their humanity — or are merely driven to ideological extremity or total alienation — in insular, ideologically accelerative communities, there were abundant examples of this phenomenon in the week following Kirk's shooting. Seconds after Kirk was killed, a TikTok influencer streamed the scene mere feet away from the stage where it happened, frantically commanding viewers to subscribe to his channel as panicked bystanders screamed and fled. In the early stages of the manhunt, politically motivated users on X, and on forums like 4chan and Kiwi Farms, began constructing theories about how the shooter must have been trans and therefore predisposed to violence, laundering a preexisting shared obsession into the broader social-media conversation. Elsewhere on social media, defensive leftists convinced one another, with scant evidence and clear motivation, that Kirk must have been killed by a follower of far-right personality Nick Fuentes, who had feuded with him for years. (It seems like a version of this theory may have found its way to the writers' room at Jimmy Kimmel Live!, drawing the attention of FCC chair Brendan Carr and leading ABC to take the show off the air "indefinitely.")
Before and after Robinson was identified, a small but not insignificant number of people cheered Kirk's death on public social-media channels, while right-wing internet users responded with an unprecedented mass doxxing campaign, collecting these posts and trying to get the people behind them fired, kicked out of school, or deported.
This was all cheered on by J.D. Vance — "call them out, and hell, call their employer," the vice-president said while hosting Charlie Kirk's show. It was gleefully justified as revenge for the left's years of "cancel culture," and was accordingly broad, sweeping up people who had merely criticized Kirk, as well as bystanders who had, in fact, said nothing at all. Elsewhere on social media, and especially on X, some users simply — and comfortably — called for direct retribution and declared that the time had come for a national divorce and civil war.
Animated, horrified, inspired, or merely unmoored into a state of disorientation by the shocking (and widely viewed) murder of a polarizing figure, a vast range of people revealed, on purpose or by accident, how alien to one another their normative experiences had become, online and off. Many surely wondered, from each edge of this nightmarish discourse — although perhaps not as much as people who didn't enter the fray at all — in what world are people posting this stuff? In death as in life, Charlie Kirk has left a lot of Americans struggling to understand how their fellow citizens could possibly say or believe the things they do.
I don't mean to suggest equivalence here. An obsessive, metrics-driven influencer is a freakish cautionary tale, not a militant slipping into terroristic threats. You might question the character or judgment of someone who publicly celebrates the death of a public figure they hate, or vents crudely or angrily about their political opponents after seeing a video of the murder of an online personality they love or identify with, but they're not participating in an organized online campaign, supported by elected officials, to have their enemies "thrown from civil society," kicked out of school, fired from their jobs, or stripped of their driver's licenses.
Fearful elected Democrats have been far more careful than scattered liberals and leftists on social media, retreating to platitudes about how violence is bad and speech should never be met with violence, and have been harshly censured by Republicans in cases where they offered anything but praise for Kirk, as in the case of Representative Ilhan Omar — the right is, as usual, more aligned. The president claimed that the "radical left" was "directly responsible" for an act of "terrorism" and pledged to go after people and organizations that "contributed to this atrocity and to other political violence." His closest adviser, Stephen Miller, was more direct and spoke in the same voice as the burn-it-all-down contingent of Kirk's online fandom. Kirk's death was the result of "a vast domestic-terror movement," he told Vance on Kirk's show, before promising to use "every resource we have at the Department of Justice, Homeland Security, and throughout this government to identify, disrupt, dismantle, and destroy these networks and make America safe again for the American people."
Fascinating to watch people strain to assemble a "radicalized online" theory from thin and incomplete evidence as they post amid some of the most openly vengeful, bloodthirsty, "radical" social feeds in memory nymag.com/intelligence...
18.09.2025 13:16
it did lots of other things, too, but the positives tend to accrue in the form of individual success or satisfaction: these are fundamentally grimy marketplaces masquerading as communities, just a total disaster
18.09.2025 14:36
strong agree, from 2017: opaque commercial social platforms becoming the most vivid, accessible representation of discourse, democracy, the public sphere, the media, etc was a massive accelerant for cynicism/resentment/despair www.nytimes.com/2017/08/21/m...
18.09.2025 14:34
Of course, we don't really expect the many thousands of people saying this kind of stuff about the killing of a widely beloved and despised famous person to one day resort to violence. We understand them as downstream not just from a specific event and its fallout but from a terrifying and heavily politicized decline in Americans' trust and regard for one another. (Also, it should be said, as relatively rare public expressions of private sentiments more people might confide to friends or family, who are more likely to agree, understand, or simply feel comfortable knowing that they don't mean much.)
This should give us pause about the ritual of assigning such posts so much value in cases where violence actually occurs.
As a general theory explaining the ills of the world, "online radicalization" is a comforting fiction without predictive power, a way to contain shocking events with complicated, overlapping motivations inside arbitrary digital boundaries, a tempting method for dismissing deeper and more sinister problems as mere symptoms of a vague and external online menace, and a tool for political blame or vindication.
"The internet" can usually no more provide us with the full story of why someone decided to kill another person than their isolated experiences at work, school, or home, which we hardly struggle to see as entwined and inseparable. But the grim incentives and cynical priorities of the commercial platforms that make it up might help explain how we come to see one another as irredeemable.
The lesson isn't "EVERYONE is being radicalized by the internet" — a similarly appealing, pat, useless framework except for assigning blame and justifying censorship and privacy crackdowns — but rather a warning against imagining the internet as a distinct place or force. We've got bigger problems!
18.09.2025 13:16
Large language models are cultural technologies. What might that mean?
Four different perspectives
I imagine there's some overlap but I'm talking about a pretty specific concept outlined here: www.programmablemutter.com/p/large-lang...
16.09.2025 18:55
what if Reddit except you always get a response and nobody gets mad at you
16.09.2025 15:19
lol
16.09.2025 14:49
vindicating I think for @himself.bsky.social, @alisongopnik.bsky.social, et al, particularly the drift toward GPT-as-reference-librarian usage
16.09.2025 14:36
There are a few major caveats here. OpenAI's own researchers worked on the paper with Harvard economist David Deming under the auspices of the National Bureau of Economic Research (in other words, the company is comfortable with this paper's findings). Additionally, the research, which is mostly an attempt to classify and sort a large set of messages, was done substantially by OpenAI's own models, which both automated the process and, the researchers say, helped preserve user anonymity. "No one looked at the content of messages while conducting analysis for this paper," the researchers claim, although they did validate their automations against human analysis used in smaller, previous studies we've talked about here before. The researchers' methodology is fascinating and could, insufficiently but not entirely inaccurately, be characterized as: "We asked ChatGPT."
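(For illustration only, and not the paper's actual pipeline: a minimal sketch of what model-driven classification plus a check against human labels might look like, assuming the OpenAI Python SDK; the category names and model choice here are placeholders, not the study's.)

# A rough sketch of model-driven message classification with a
# validation step against a small human-labeled sample.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["practical guidance", "seeking information", "writing", "other"]

def classify(message: str) -> str:
    """Ask a model to sort one (already de-identified) message into a category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the user's message into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the category only."},
            {"role": "user", "content": message},
        ],
    )
    label = (response.choices[0].message.content or "").strip().lower()
    return label if label in CATEGORIES else "other"

def agreement_rate(human_labeled: list[tuple[str, str]]) -> float:
    """Fraction of messages where the model's label matches the human label."""
    hits = sum(classify(text) == label for text, label in human_labeled)
    return hits / len(human_labeled) if human_labeled else 0.0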
The picture that emerges from this data matches this thesis pretty closely: ChatGPT, for many of its users, is a way to access, remix, summarize, retrieve, and sometimes reproduce information and ideas that already exist in the world; in other words, they use this one tool much in the way that they previously engaged with the entire web — arguably the last great "cultural and social technology" — and through a similar routine of constant requests, consultations, and diversions. One doesn't get the feeling from this research that we're careening toward uncontrollable superintelligence, or even imminent invasion of the workforce by agentic AI bots, but it does suggest users are more than comfortable replacing and extending many of their current online interactions — searching, browsing, and consulting with the ideas of others — with an ingratiating chatbot simulation.
OpenAI's "what is ChatGPT, anyway?" study is a little weird, sort of funny, but actually illuminating nymag.com/intelligence...
16.09.2025 14:36
congratulations to all these great men of history, who will definitely be revered and celebrated for generations!
05.09.2025 14:23
How Investors Think AI Will Actually Make Money
โLearn to codeโ is taking on new meaning in Silicon Valley.
helpful @jwherrman.bsky.social column on the value and business models of AI coding startups nymag.com/intelligence...
04.09.2025 17:52
god what a grim product
04.09.2025 14:17
if "grandpa with fox on in the background" made you sad, wait until you meet "grandpa who watches 2000 daily reels of slop algorithmically targeted at his worst impulses and fears"
04.09.2025 14:15
clear act of war against old people and their families www.theverge.com/news/769460/...
04.09.2025 14:04
My tests of the AI grader "agent" get a mention in this article about how everything AI everywhere all the time is now an agent.
22.08.2025 18:56