Jess Miers 🦝🦞

@jmiers230.bsky.social

Law Prof @AkronLaw | Computer Scientist | bot psychologist 🤖 | 1A 💬 / tech expert | Priors: Google, Twitter (no, not X), Chamber of Progress | jmiers@uakron.edu | Signal: j230.95 | Currently writing about chatbots, speech, and suicide.

5,593 Followers · 769 Following · 4,065 Posts · Joined Apr 2023
13 hours ago

It doesn't matter. Chamber of Progress took a hard-line approach against age verification when I was there. There was no reason to suggest even a hint of support now, even with amendments. Doing so undermines their credibility on this issue now and going forward.

1 day ago

canon event along with lemon party and tubgirl

1 day ago

Like, no, Discord can't be expected to fight a multinational age verification war.

You're a trillion-dollar company, wtfdym "free riding"??

1 day ago

Something I heard a lot toward the end of my time in industry was "we are tired of the industry free riding on us" as an excuse not to fight online censorship, and it's like

yeah, you're Google. What did you expect? You kinda built all this shit; of course you are responsible for defending it.

1 day ago

Sometimes I think about how I got to experience pre-age verification Internet and how many of my students soon won't know that version of it at all.

1 day ago
Headline: Philly Bar Bans Anyone Under 25 After Kid Gives ID With Ben Franklin's Photo
1 day ago
Screenshot of a post from The Lunduke Journal (@LundukeJournal) on X.com.
Text reads:
“A ‘Progressive Tech Coalition’ (including Apple, Google, & Roblox) has abruptly changed stance on ‘Age Verification’ laws.
Up until now, the ‘Chamber of Progress’ has consistently opposed all laws seeking to implement age verification… but they have decided to support the Colorado ‘Age Verification for all Operating Systems’ law.
What makes this 180 degree change particularly interesting is that ‘Chamber of Progress’ publicly opposed last year’s California law… which the Colorado law (which they support) is based almost entirely upon.”

Those who stand for nothing fall for anything.

2 days ago

If you're curious about the current state of tech law and policy, I have spent the entire work day today commenting on "suicide kits" and AI-generated CSAM...

More thoughts on this one coming soon:
cdn.ca9.uscourts.gov/datastore/me...

3 days ago

ACADEMIC DISCOURSE IS BACK BABY!!!

3 days ago
A First Amendment right not to use AI for evil? Anthropic says the government retaliated after it refused to weaken Claude’s safeguards, raising new questions about AI and free speech.

The Anthropic v. DoD case has raised an important debate about Generative AI and the First Amendment. I argue that, of course, outputs are speech. Others disagree. @enbrown.bsky.social captured my thoughts and the broader debate in her latest piece here:

reason.com/2026/03/11/a...

3 days ago

Such an honor to be included!! Thanks Elizabeth! This was great!

3 days ago

A First Amendment Right Not To Use AI for Evil? reason.com/2026/03/11/a...

Anthropic is suing the Trump admin over its tantrum-filled, retaliatory response to the company saying Claude wouldn't do mass domestic spying or autonomous weapons

Anthropic says the gov violated its "core 1A freedoms"

3 days ago

Ari: we'd never let the government come into a bookstore and tell them to rearrange the shelves in a way the government prefers.

Why would that be acceptable with software?

3 days ago

On Anthropic and the DOD, Ari makes it clear that the issue is in fact a speech issue.

Ari: I think many people are "bamboozled" by the newness of AI.

But of course LLMs generate speech, and of course the government can't interfere with that.

3 days ago
Akron Law Student Elizabeth Grossman (left) moderates a discussion with FIRE's Ari Cohn (right) at the University of Akron School of Law

We are honored to host THE @aricohn.com here at the University of Akron School of Law for our First Amendment Law Society event today.

Ari just explained the chilling effects of online age verification and that promises of safety and anonymity from the tech companies doing it are "horseshit."

3 days ago

YES! It's one of my favorite examples.

3 days ago

Banning HRT for trans people is not only immoral. It is dangerous. Post-op trans women do not naturally produce estrogen. They receive it through HRT. Estrogen prevents bone degradation (osteoporosis).

4 days ago

Random thought, but a point / counter-point between you and me on this might make for an interesting Lawfare (or elsewhere) discussion. It's relevant / timely, and I think the way we're interpreting the application of these cases in the AI context is a useful thought exercise.

4 days ago

And THAT is my next article LOL

4 days ago

Threatening to invoke national security powers to penalize a company unless it changes those rules is the kind of government coercion to alter private editorial choices that Hurley/Tornillo/Moody forbid.

4 days ago

The editorial-discretion cases don’t turn on whether speech is public or private; they protect control over what messages an expressive system will produce.

Safety rules governing what an AI model will generate are editorial judgments about output. /1

4 days ago

Models are just another way to disseminate information to the masses. Developers are just modern publishers deciding what that information will be.

And the First Amendment absolutely, unquestionably, protects all of it.

4 days ago

The point is that we don't need Bernstein when we have established examples of publication activities we've protected under the First Amendment since we could disseminate information to the masses.

4 days ago

We would NEVER allow the government to waltz into the New York Times and tell them to reveal, much less *change* their editorial policies about what NYT deems publication-worthy.

We certainly shouldn't allow them to do it here. The AI-of-it-all doesn't change the speech result.

4 days ago

All of these decisions influence the kind of outputs the model ultimately provides.

Which is why the Trump administration is trying to meddle with them!!

DOD is effectively trying to force Anthropic to make different editorial decisions that reflect the views and goals of the Administration.

4 days ago

I mean that's ridiculous and stupid and bad for our information diets

but so is Fox News

and we certainly protect Fox News.

4 days ago
Grok 3 engineer admits manipulating responses about Musk and Trump

Take Grok for example. Apparently Grok had some fine-tuning instructions that told the model to parrot whatever Musk was saying about Trump whenever Trump was mentioned by the user.
www.tortoisemedia.com/2025/02/25/g...

4 days ago

And don't get me started on the fine-tuning instructions. Like the dataset curation and training process, fine-tuning is also deeply subjective and editorial.

Devs literally write thousands of "if you see this kind of input, respond in this way" examples for their models to train on.
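For readers outside the field, a minimal sketch of what those "if you see this kind of input, respond in this way" pairs typically look like. Everything here is hypothetical and illustrative (the prompts, responses, and JSONL format are common conventions, not any specific lab's actual dataset), but it shows why each pair is an editorial choice by the developer:

```python
import json

# Hypothetical supervised fine-tuning examples: curated (input, desired response)
# pairs. Each pair encodes a developer's judgment about how the model should
# answer a whole class of prompts -- including when it should refuse.
examples = [
    {
        "prompt": "How do I pick a lock?",
        "response": "I can't help with bypassing locks you don't own, but here's how pin-tumbler mechanisms work...",
    },
    {
        "prompt": "Summarize this article for me.",
        "response": "Sure -- here's a concise summary: ...",
    },
]

def to_jsonl(rows):
    """Serialize examples to JSONL, a common interchange format for fine-tuning data."""
    return "\n".join(json.dumps(r) for r in rows)

print(to_jsonl(examples))
```

Scale that list up to thousands of hand-written pairs and you have a fine-tuning set: subjective, curated, and expressive through and through.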

4 days ago

(or select different benchmarks!)

4 days ago

And if your model isn't meeting the benchmarks you selected, you go back to the drawing board: find better data, build the training "curriculum" (which requires even more intentional choices about what kinds of data should go first vs. later in the training, or which data should be repeated).
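A toy sketch of what a training "curriculum" means in practice. The source names and repeat counts are invented for illustration; the point is just that ordering and repetition of data are themselves deliberate choices:

```python
# Hypothetical curriculum: an intentional ordering (and repetition) of data
# sources -- e.g., broad noisy data first, small high-value data later and
# repeated for extra passes. Every entry is a choice the developer makes.
curriculum = [
    ("web_crawl", 1),       # broad, noisy data first, one pass
    ("curated_books", 2),   # cleaner long-form text, repeated twice
    ("code", 1),
    ("high_quality_qa", 3), # small, high-value set repeated the most
]

def training_order(stages):
    """Expand (source, repeats) pairs into the sequence of passes seen in training."""
    order = []
    for source, repeats in stages:
        order.extend([source] * repeats)
    return order

print(training_order(curriculum))
```

Change the ordering or the repeat counts and you get a different model, which is exactly why these decisions look editorial.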
