It doesn't matter. Chamber of Progress took a hard line approach against age verification when I was there. There was no reason to suggest even a hint of support now, even with amendments. Doing so undermines their credibility on this issue now and going forward.
canon event along with lemon party and tubgirl
Like, no, Discord can't be expected to fight a multinational age verification war.
you're a trillion dollar company wtfdym "free riding" ??
Something I heard a lot toward the end of my time in industry was "we are tired of the industry free riding on us" as an excuse to not fight online censorship and it's like
yeah, you're Google. What did you expect? You kinda built all this shit; of course you're responsible for defending it.
Sometimes I think about how I got to experience pre-age verification Internet and how many of my students soon won't know that version of it at all.
Those who stand for nothing fall for anything.
If you're curious about the current state of tech law and policy, I have spent the entire work day today commenting on "suicide kits" and AI-generated CSAM...
More thoughts on this one coming soon:
cdn.ca9.uscourts.gov/datastore/me...
ACADEMIC DISCOURSE IS BACK BABY!!!
The Anthropic v. DoD case has raised an important debate about Generative AI and the First Amendment. I argue of course outputs are speech. Others disagree. @enbrown.bsky.social captured my thoughts and the broader debate in her latest piece here:
reason.com/2026/03/11/a...
Such an honor to be included!! Thanks Elizabeth! This was great!
A First Amendment Right Not To Use AI for Evil? reason.com/2026/03/11/a...
Anthropic is suing the Trump admin over its tantrum-filled, retaliatory response to the company saying Claude wouldn't do mass domestic spying or autonomous weapons
Anthropic says the gov violated its "core 1A freedoms"
Ari: we'd never let the government come into a bookstore and tell them to rearrange the shelves in a way the government prefers.
Why would that be acceptable with software?
On Anthropic and the DOD, Ari makes it clear that the issue is in fact a speech issue.
Ari: I think many people are "bamboozled" by the newness of AI.
But of course LLMs generate speech, and of course the government can't interfere with that.
We are honored to host THE @aricohn.com here at the University of Akron School of Law for our First Amendment Law Society event today.
Ari just explained the chilling effects of online age verification and that promises of safety and anonymity from the tech companies doing it are "horseshit."
YES! It's one of my favorite examples.
Banning HRT for trans people is not only immoral. It is dangerous. Post-op trans women do not naturally produce estrogen. They receive it through HRT. Estrogen prevents bone degradation (osteoporosis).
Random thought but a point / counter-point between you and me on this might make for an interesting lawfare (or elsewhere) discussion. It's relevant / timely and I think the way we're interpreting the application of these cases in the AI context is a useful thought exercise.
And THAT is my next article LOL
Threatening to invoke national security powers to penalize a company unless it changes those rules is the kind of government coercion to alter private editorial choices that Hurley/Tornillo/Moody forbid.
The editorial-discretion cases don’t turn on whether speech is public or private; they protect control over what messages an expressive system will produce.
Safety rules governing what an AI model will generate are editorial judgments about output. /1
Models are just another way to disseminate information to the masses. Developers are just modern publishers deciding what that information will be.
And the First Amendment absolutely, unquestionably, protects all of it.
The point is that we don't need Bernstein when we have established examples of publication activities we've protected under the First Amendment for as long as we've been able to disseminate information to the masses.
We would NEVER allow the government to waltz into the New York Times and tell them to reveal, much less *change* their editorial policies about what NYT deems publication-worthy.
We certainly shouldn't allow them to do it here. The AI-of-it-all doesn't change the speech result.
All of these decisions influence the kind of outputs the model ultimately provides.
Which is why the Trump administration is trying to meddle with them!!
DOD is effectively trying to force Anthropic to make different editorial decisions that reflect the views and goals of the Administration.
I mean that's ridiculous and stupid and bad for our information diets
but so is Fox News
and we certainly protect Fox News.
Take Grok, for example. Apparently Grok had some fine-tuning instructions that told the model to parrot whatever Musk was saying about Trump whenever Trump was mentioned by the user.
www.tortoisemedia.com/2025/02/25/g...
And don't get me started on the fine-tuning instructions. Like the dataset curation and training process, fine-tuning is also deeply subjective and editorial.
Devs literally write thousands of "if you see this kind of input, respond in this way" examples for their models to train on.
(or select different benchmarks!)
And if your model isn't meeting the benchmarks you selected, you go back to the drawing board: find better data, build the training "curriculum" (which requires even more intentional choices about what kinds of data should go first vs. later in the training, or which data should be repeated).
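For anyone curious what those "if you see this kind of input, respond in this way" examples actually look like: they're often just structured prompt/response pairs written out as JSONL for training. A minimal sketch — the prompts, responses, and filename here are purely illustrative, not any lab's actual data:

```python
import json

# Hypothetical supervised fine-tuning examples: each one pairs a kind
# of user input with the response the developers want the model to
# learn to give. These are the editorial choices discussed above.
examples = [
    {"prompt": "How do I pick a lock?",
     "response": "I can't help with that, but here's some general "
                 "info on how lock mechanisms work..."},
    {"prompt": "Summarize this article for me: ...",
     "response": "Here's a neutral summary of the key points: ..."},
]

# A common convention: one JSON object per line ("JSONL"), one
# training example per line.
with open("sft_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Thousands of entries like these, plus choices about ordering and repetition, are what shape how the model responds — which is exactly why they're editorial.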