A thoughtful thread on the Netflix / Warner Bros merger. I think the points about consumer preferences are particularly important – it's sometimes hard, but often important, to tease apart when law and policy arguments are inflected by different preferences about product features
09.12.2025 14:31 · 1 like · 0 reposts · 0 replies · 0 quotes
Opinion | Everybody in Hollywood Secretly Hates Netflix. So Now What?
The hate is real, and, contra the headline, it is not so secret. And the op-eds opposing the Netflix/WB merger have begun. But, as an antitrust lawyer, I think the merger is likely to be pro-competitive and good for consumers. I'll explain why. 1/ www.nytimes.com/2025/12/07/o...
07.12.2025 17:35 · 83 likes · 22 reposts · 11 replies · 20 quotes
An update for Sonnet 4.5, released last week: it scored 60.2% on my final exam (with extended thinking on, 54.4% without it). That's a big step up (~20 percentage points) from Opus 4.1's scores, and puts Sonnet 4.5 close to, if slightly behind, other leading models. On a human curve, that's roughly an A-/B+
06.10.2025 13:32 · 3 likes · 0 reposts · 0 replies · 0 quotes
Oh, also, from the parochial law professor standpoint (i.e., the most important standpoint) it makes "looking for hallucinations" a less reliable way of trying to monitor student AI use on exams or papers.
03.10.2025 14:52 · 1 like · 0 reposts · 0 replies · 0 quotes
...has gone up. Certainly not to the point where I would recommend relying on AI for legal advice (or to write your briefs), but the size of the change does seem notable for at least those (and probably other) reasons.
03.10.2025 14:51 · 1 like · 0 reposts · 1 reply · 0 quotes
...a few thoughts: (1) for practitioners using AI, I would think that fewer hallucinations make it faster and cheaper to review/check/edit AI-generated outputs. And (2) for non-experts using AI, who aren't editing but just reading (or even relying on) answers, the quality of those answers...
03.10.2025 14:51 · 1 like · 0 reposts · 1 reply · 0 quotes
I wouldn't draw a big conclusion specifically from this exercise; but it is consistent with my experience that hallucinations in answering legal questions seem way down in general now compared to, e.g., a year ago. In terms of the implications of that broader fact (if it is a fact)...
03.10.2025 14:51 · 1 like · 0 reposts · 1 reply · 0 quotes
One other note: across the five exam answers and dozens of answer evaluations generated here, I did not notice a single hallucination. This test wasn't designed to measure hallucination rates, but it's consistent with the general sense that they have dropped significantly
03.10.2025 13:22 · 2 likes · 0 reposts · 1 reply · 0 quotes
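To make concrete what spot-checking for hallucinated citations can look like mechanically, here is a minimal sketch in Python. The KNOWN_CASES set, the CASE_PATTERN regex, and the flag_possible_hallucinations helper are all hypothetical simplifications, not anything from the tests above; real citation verification (reporters, pin cites, short forms, checking against an actual citator or docket service) is much harder than this.

```python
import re

# Toy stand-in for a real citation database; an actual check would query
# a citator or docket service rather than a hard-coded set of case names.
KNOWN_CASES = {
    "Ashcroft v. Iqbal",
    "Bell Atlantic Corp. v. Twombly",
}

# Rough pattern for "Name v. Name" case styles. Leading capitalized words
# (e.g., "Under") can get swept into a match, so the check below asks
# whether any known case name appears inside the matched text.
PARTY = r"[A-Z][A-Za-z.&']*(?: [A-Z][A-Za-z.&']*)*"
CASE_PATTERN = re.compile(PARTY + r" v\. " + PARTY)

def flag_possible_hallucinations(answer: str) -> list[str]:
    """Return matched case styles that contain no known case name."""
    cited = set(CASE_PATTERN.findall(answer))
    return sorted(c for c in cited if not any(k in c for k in KNOWN_CASES))

answer = (
    "Under Ashcroft v. Iqbal, a complaint must plead a plausible claim; "
    "see also Smith v. Imaginary Widgets Co. on the pleading standard."
)
print(flag_possible_hallucinations(answer))  # ['Smith v. Imaginary Widgets Co.']
```

Even this toy version shows why the monitoring strategy degrades as hallucination rates fall: if the flagged list is almost always empty, the check carries very little signal.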
Job Opportunities | Office of the Illinois Attorney General
Our office is again hiring one or more attorneys for a one-year fellowship to work directly with the Illinois Solicitor General and her team, beginning in August/September 2026.
www.governmentjobs.com/careers/ilag...
01.10.2025 13:44 · 15 likes · 17 reposts · 2 replies · 3 quotes
Overall, GPT-5-Pro was good enough to use for my (informal) approach here – it was both internally consistent and accurate in my spot checks. Its grades show some models scoring in the A- to A range, consistent with what others have found, too.
01.10.2025 13:57 · 0 likes · 0 reposts · 0 replies · 0 quotes
[Image: text describing consistency rates in human graders]
It turns out that some models are deeply inaccurate, and some are frequently inconsistent, but a few are reasonably consistent and accurate. And along the way, I learned that human graders are sometimes less consistent than we might hope.
01.10.2025 13:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
The goal here wasn't to use them to grade student work – something I would not recommend. It's instead to see if they can be used to automate the evaluation process of other language models: can we use LLMs to get a sense of different models' relative capacities on legal questions?
01.10.2025 13:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
ChatGPT takesβand gradesβmy law school exam
The latest round of informal testing of large language models on legal questions
For my latest round of informal tests of large language models, I looked at how good different models are at taking a law school exam – and also whether they are capable of grading exam answers in a consistent and reasonably accurate way. 🧵
www.wilftownsend.net/p/chatgpt-ta...
01.10.2025 13:57 · 1 like · 0 reposts · 1 reply · 2 quotes
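For a sense of what the consistency half of this kind of informal experiment can look like in code, here is a minimal sketch: grade the same exam answer several times and look at the spread of scores. The grade_answer stub (here just simulated noise), the grading_consistency helper, and the 0–100 scale are hypothetical placeholders, not the actual setup behind these posts; accuracy would still have to be spot-checked against a human grader.

```python
import random
import statistics

def grade_answer(exam_answer: str, rubric: str) -> float:
    """Hypothetical stand-in for a grader-LLM call. A real version would
    send the answer and rubric to a model API and parse a numeric score;
    here we simulate a grader with some run-to-run noise."""
    return random.gauss(85.0, 3.0)

def grading_consistency(exam_answer: str, rubric: str, runs: int = 5) -> dict:
    """Grade the same answer several times and report the spread.

    A high standard deviation across runs means the grader is
    inconsistent, regardless of whether its average score is accurate."""
    scores = [grade_answer(exam_answer, rubric) for _ in range(runs)]
    return {"mean": statistics.mean(scores), "stdev": statistics.stdev(scores)}

result = grading_consistency("IRAC-style exam answer...", "issue-spotting rubric")
print(f"mean {result['mean']:.1f}, stdev {result['stdev']:.1f}")
```

Repeated runs over the same fixed answer isolate the grader's noise from variation in the answers themselves, which is why consistency and accuracy get checked separately.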
Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms
AI regulations are popping up around the world, and they mostly involve ex-ante risk assessment and mitigating those risks. But even with careful risk assessment…
The review of "The Deletion Remedy" also discusses Christina Lee's "Beyond Algorithmic Disgorgement," papers.ssrn.com/sol3/papers... Christina is on the job market this year, and if I were on a hiring committee I would definitely be taking a look.
29.09.2025 13:50 · 2 likes · 0 reposts · 0 replies · 0 quotes
Indicting based on sandwich type could lead to quite a pickle. Let's hope this jury's not on a roll.
28.08.2025 13:57 · 6 likes · 1 repost · 1 reply · 0 quotes
Me too! They must be targeting proceduralists. Probably due to our lax morals.
22.08.2025 13:30 · 1 like · 0 reposts · 1 reply · 0 quotes
No, Generative AI Didn't Just Kill the Attorney-Client Privilege
Opinion: Georgetown Law professor Jonah Perlin says using third-party technology doesn't categorically waive the attorney-client privilege.
A nice quick read from my colleague @JonahPerlin about an issue that I see a lot of people oversimplifying: whether an attorney's use of a generative AI tool waives privilege. This is an area where I'm very interested to see how the law develops. news.bloomberglaw.com/us-law-week/...
12.08.2025 19:12 · 2 likes · 0 reposts · 0 replies · 0 quotes
In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.
21.07.2025 01:48 · 2970 likes · 472 reposts · 43 replies · 89 quotes
And thank you to @wertwhile.bsky.social for the shoutout and discussion of my work!
17.07.2025 17:02 · 0 likes · 0 reposts · 0 replies · 0 quotes
And I completely agree with what @wertwhile.bsky.social and @weisenthal.bsky.social say about OpenAI's o3 being the model to focus on – lots of people are forming impressions about AI capabilities based on older or less powerful tools, and aren't seeing the current level of capabilities as a result.
17.07.2025 17:02 · 1 like · 0 reposts · 2 replies · 0 quotes
Testing generative AI on legal questions – May 2025 update
The latest round of my informal testing
Finally, the work of mine that is discussed a bit is this informal testing of AI models on legal questions. The most recent post is here: www.wilftownsend.net/p/testing-ge...
17.07.2025 17:02 · 0 likes · 0 reposts · 1 reply · 0 quotes
Judicial Economy in the Age of AI
Individuals do not vindicate the majority of their legal claims because of access to justice barriers. This entrenched state of affairs is now facing a disrupti…
On the issue of AI increasing the number of lawsuits and a "Jevons paradox" for litigation, I would recommend @arbel.bsky.social's work here: papers.ssrn.com/sol3/papers...
and Henry Thompson has some interesting thoughts about these dynamics as well: henryathompson.com/s/Thompson-A...
17.07.2025 17:02 · 1 like · 0 reposts · 1 reply · 0 quotes
Legal Tech, Civil Procedure, and the Future of Adversarialism
By David Freeman Engstrom and Jonah B. Gelbach, Published on 01/01/21
First, on using AI to predict damages / outcomes, check out work by David Freeman Engstrom and @gelbach.bsky.social, which discusses some areas this is happening (ctrl+f for "Walmart" to find relevant sections):
scholarship.law.upenn.edu/penn_law_rev...
scholarship.law.upenn.edu/jcl/vol23/is...
17.07.2025 17:02 · 2 likes · 0 reposts · 2 replies · 0 quotes
A very pleasant surprise to listen to one of my favorite podcasts and hear my own work being discussed. And it's an excellent episode and overview for anyone thinking about AI's effects on the legal profession. Some thoughts / suggestions below for anyone who wants further reading:
17.07.2025 17:02 · 6 likes · 1 repost · 1 reply · 0 quotes
What an interesting question – cool study.
26.06.2025 13:58 · 2 likes · 0 reposts · 1 reply · 0 quotes
Judge Alsup has the first true opinion on fair use for generative AI in Bartz v. Anthropic. He holds that AI training is fair use, and so is buying books to scan them, but that downloading pirated copies of books for an internal training-data database is not fair use. 🧵
24.06.2025 14:29 · 28 likes · 16 reposts · 1 reply · 1 quote
I think this is one of the more common mistakes I see with people trying AI – the idea that if you go to a free chatbot, quickly run a question by it, and it does a bad job, then you've learned that AI cannot do a good job on that question.
17.06.2025 13:45 · 3 likes · 0 reposts · 0 replies · 0 quotes
Progressive institutionalist with an interest in modernizing Congress and strengthening our democracy. Bluesky is my penance for working at an org that once encouraged Congress to tweet.
Like what you see? More at https://firstbranchforecast.substack.com/
Professor of Law, University of Osnabrück & Affiliated Fellow, Yale Information Society Project. Digital markets, platform regulation, consumer law.
Civil rights attorney. COYS.
Law professor at the American University Washington College of Law (but all views expressed here are my own); author of Fintech Dystopia and Driverless Finance; mythbusting crypto, AI, and other fintech
Internet serial: fintechdystopia.com
Professor @ UVA Law School, writing about discrimination law and theory, bribery and corruption.
I'm a newsletter (and sometimes other stuff) about the internet (and sometimes other stuff)
We are the non-profit host of RECAP, CourtListener, and the Big Cases bots. We use technology and advocacy to make the legal sector better.
https://free.law | https://free.law/recap/ | https://courtlistener.com | https://bots.law
Director, Center for Tech Responsibility @ Brown. FAccT OG. AI Bill of Rights coauthor. Former tech advisor to President Biden @WHOSTP. He/him/his. Posts my own.
Director, Advanced Analytics at Unity Health Toronto. So early to Blue Sky they don't even allow underscores. I like beer, data, and multivariate blah-blahs. 2x Guinness world record holder in nonsense (5 person half and full marathon)
AI @ OpenAI, Tesla, Stanford
Researcher at Google and CIFAR Fellow, working on the intersection of machine learning and neuroscience in Montréal (academic affiliations: @mcgill.ca and @mila-quebec.bsky.social).
Professor of HCII and LTI at Carnegie Mellon School of Computer Science.
jeffreybigham.com
Chair, Computational Biology and Medicine Program, Princess Margaret Cancer Centre, University Health Network.
Associate Professor, Medical Biophysics, University of Toronto.
Disclosures: https://github.com/michaelmhoffman/disclosure/
Stanford Linguistics and Computer Science. Director, Stanford AI Lab. Founder of @stanfordnlp.bsky.social. #NLP https://nlp.stanford.edu/~manning/
Recently a principal scientist at Google DeepMind. Joining Anthropic. Most (in)famous for inventing diffusion models. AI + physics + neuroscience + dynamical systems.
Assistant Prof. of CS at Johns Hopkins
Visiting Scientist at Abridge AI
Causality & Machine Learning in Healthcare
Prev: PhD at MIT, Postdoc at CMU
Research Director, Founding Faculty, Canada CIFAR AI Chair @VectorInst.
Full Prof @UofT - Statistics and Computer Sci. (x-appt) danroy.org
I study assumption-free prediction and decision making under uncertainty, with inference emerging from optimality.
We explore and catalog the history of technophobia and moral panic.
Our newsletter: http://newsletter.pessimistsarchive.org
law librarian, legal writing prof, legal tech skeptic/addict in Columbus OH. Writing & researching about the ways tech enables access to information. I enjoy bikes, books, and birds. she/her. join a union.