
Danny Wilf-Townsend

@dannywt.bsky.social

Associate Professor of Law at Georgetown Law, thinking, writing, and teaching about civil procedure, consumer protection, and AI. Blog: https://www.wilftownsend.net/ Academic papers: https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2491047

659 Followers  |  429 Following  |  82 Posts  |  Joined: 02.10.2023

Latest posts by dannywt.bsky.social on Bluesky

A thoughtful thread on the Netflix / Warner Bros merger. I think the points about consumer preferences are particularly important — it's sometimes hard, but often important, to tease apart when law and policy arguments are inflected by different preferences about product features

09.12.2025 14:31 — 👍 1    🔁 0    💬 0    📌 0
Opinion | Everybody in Hollywood Secretly Hates Netflix. So Now What?

The hate is real, and, contra the headline, it is not so secret. And the op-eds opposing the Netflix/WB merger have begun. But, as an antitrust lawyer, I think the merger is likely to be pro-competitive and good for consumers. I'll explain why. 1/ www.nytimes.com/2025/12/07/o...

07.12.2025 17:35 — 👍 83    🔁 22    💬 11    📌 20

An update for Sonnet 4.5, released last week: it scored 60.2% on my final exam (with extended thinking on, 54.4% without it). That's a big step up (~20 percentage points) from Opus 4.1's scores, and puts Sonnet 4.5 close to, if slightly behind, other leading models. On a human curve, that's roughly an A-/B+.

06.10.2025 13:32 — 👍 3    🔁 0    💬 0    📌 0
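
For readers outside law teaching, here is a minimal sketch of what grading "on a human curve" means: the raw percentage is placed within the distribution of human exam scores and assigned a letter by rank, so a ~60% raw score can still land near the top of the class. Everything below (the class scores, the letter cutoffs, the helper names percentile and curved_grade) is made up for illustration and is not from the actual exam.

```python
# Hypothetical illustration of rank-based ("curved") grading; the human
# scores and letter-grade cutoffs are invented, not the author's.

def percentile(score: float, human_scores: list[float]) -> float:
    """Share of human scores strictly below the given score, as 0-100."""
    below = sum(1 for s in human_scores if s < score)
    return 100 * below / len(human_scores)

def curved_grade(score: float, human_scores: list[float]) -> str:
    """Map a raw score to a letter grade by its rank within the class."""
    p = percentile(score, human_scores)
    if p >= 90:
        return "A"
    if p >= 75:
        return "A-"
    if p >= 50:
        return "B+"
    return "B or below"

hypothetical_class = [42, 47, 51, 53, 55, 56, 58, 61, 63, 70]
print(curved_grade(60.2, hypothetical_class))  # -> "B+" under these made-up cutoffs
```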

Oh, also, from the parochial law professor standpoint (i.e., the most important standpoint), it makes "looking for hallucinations" a less reliable way to monitor student AI use on exams or papers.

03.10.2025 14:52 — 👍 1    🔁 0    💬 0    📌 0

...has gone up. Certainly not to the point where I would recommend relying on AI for legal advice (or to write your briefs), but the size of the change does seem notable for at least those (and probably other) reasons.

03.10.2025 14:51 — 👍 1    🔁 0    💬 1    📌 0

...a few thoughts: (1) for practitioners using AI, I would think that fewer hallucinations make it faster and cheaper to review/check/edit AI-generated outputs. And (2) for non-experts using AI, who aren't editing but just reading (or even relying on) answers, the quality of those answers...

03.10.2025 14:51 — 👍 1    🔁 0    💬 1    📌 0

I wouldn't draw a big conclusion specifically from this exercise; but it is consistent with my experience that hallucinations in answering legal questions seem way down in general now compared to, e.g., a year ago. In terms of the implications of that broader fact (if it is a fact)...

03.10.2025 14:51 — 👍 1    🔁 0    💬 1    📌 0

One other note: across the five exam answers and dozens of answer evaluations generated here, I did not notice a single hallucination. This test wasn't designed to measure hallucination rates, but it's consistent with the general sense that they have dropped significantly.

03.10.2025 13:22 — 👍 2    🔁 0    💬 1    📌 0
Job Opportunities | Office of the Illinois Attorney General

Our office is again hiring one or more attorneys for a one-year fellowship to work directly with the Illinois Solicitor General and her team, beginning in August/September 2026.

www.governmentjobs.com/careers/ilag...

01.10.2025 13:44 — 👍 15    🔁 17    💬 2    📌 3

Overall, GPT-5-Pro was good enough to use for my (informal) approach here—it was both internally consistent and looked good in accuracy spot checks. Its grades show some models scoring in the A- to A range, consistent with what others have found, too.

01.10.2025 13:57 — 👍 0    🔁 0    💬 0    📌 0
Text describing consistency rates in human graders

It turns out that some models are deeply inaccurate, and some are frequently inconsistent, but a few are reasonably consistent and accurate. And along the way, I learned that human graders are sometimes less consistent than we might hope.

01.10.2025 13:57 — 👍 0    🔁 0    💬 1    📌 0

The goal here wasn't to use them to grade student work—something I would not recommend. It's instead to see whether they can be used to automate the evaluation of other language models: can we use LLMs to get a sense of different models' relative capacities on legal questions?

01.10.2025 13:57 — 👍 0    🔁 0    💬 1    📌 0
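
To make that concrete, here is a minimal sketch, not the author's actual setup, of the kind of consistency-and-accuracy check the thread describes: score the same exam answer several times, look at how much the scores spread, and compare their average to a spot-checked reference grade. The grade_with_llm() helper is hypothetical; plug in whatever model API you actually use.

```python
# Rough LLM-as-grader sanity check; grade_with_llm() is a hypothetical stub.
import statistics

def grade_with_llm(answer: str, rubric: str) -> float:
    """Hypothetical: send the rubric and exam answer to a grader model
    and parse a numeric score (e.g., 0-100) out of its reply."""
    raise NotImplementedError("plug in the model API you actually use")

def grader_check(answer: str, rubric: str, reference: float, runs: int = 5) -> dict:
    """Grade the same answer `runs` times; report consistency and accuracy."""
    scores = [grade_with_llm(answer, rubric) for _ in range(runs)]
    return {
        "scores": scores,
        "spread": max(scores) - min(scores),                # internal consistency
        "error": abs(statistics.mean(scores) - reference),  # vs. spot-check grade
    }
```

A grader model that keeps both the spread and the error small across many answers is what the thread calls "reasonably consistent and accurate."
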
ChatGPT takes—and grades—my law school exam | The latest round of informal testing of large language models on legal questions

For my latest round of informal tests of large language models, I looked at how good different models are at taking a law school exam—and also whether they are capable of grading exam answers in a consistent and reasonably accurate way. 🧵
www.wilftownsend.net/p/chatgpt-ta...

01.10.2025 13:57 — 👍 1    🔁 0    💬 1    📌 2
Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms | AI regulations are popping up around the world, and they mostly involve ex-ante risk assessment and mitigating those risks. But even with careful risk assessment ...

The review of "The Deletion Remedy" also discusses Christina Lee's "Beyond Algorithmic Disgorgement," papers.ssrn.com/sol3/papers..... Christina is on the job market this year, and if I were on a hiring committee I would definitely be taking a look.

29.09.2025 13:50 — 👍 2    🔁 0    💬 0    📌 0
Should drafters be penalized for clearly unenforceable terms? - Courts Law | Daniel Wilf-Townsend, Deterring Unenforceable Terms, 111 Va. L. Rev. __ (forthcoming 2025), available at SSRN (June 6, 2024). Maureen Carroll: Most of us (if not all) have entered a contract with one or ...

It was very nice to have two of my recent articles featured in JOTWELL reviews this month—Maureen Carroll on "Deterring Unenforceable Terms," courtslaw.jotwell.com/should-draft...
and @margotkaminski.bsky.social on "The Deletion Remedy" cyber.jotwell.com/ai-disgorgem...

29.09.2025 13:46 — 👍 3    🔁 1    💬 1    📌 0

Indicting based on sandwich type could lead to quite a pickle. Let's hope this jury's not on a roll.

28.08.2025 13:57 — 👍 6    🔁 1    💬 1    📌 0

Me too! They must be targeting proceduralists. Probably due to our lax morals.

22.08.2025 13:30 — 👍 1    🔁 0    💬 1    📌 0
No, Generative AI Didn't Just Kill the Attorney-Client Privilege | Opinion: Georgetown Law professor Jonah Perlin says using third-party technology doesn't categorically waive the attorney-client privilege.

A nice quick read from my colleague @JonahPerlin about an issue that I see a lot of people oversimplifying: whether an attorney's use of a generative AI tool waives privilege. This is an area where I'm very interested to see how the law develops. news.bloomberglaw.com/us-law-week/...

12.08.2025 19:12 — 👍 2    🔁 0    💬 0    📌 0

In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.

21.07.2025 01:48 — 👍 2970    🔁 472    💬 43    📌 89

And thank you to @wertwhile.bsky.social for the shoutout and discussion of my work!

17.07.2025 17:02 — 👍 0    🔁 0    💬 0    📌 0

And I completely agree with what @wertwhile.bsky.social and @weisenthal.bsky.social say about OpenAI's o3 being the model to focus on—lots of people are forming impressions about AI capabilities based on older or less powerful tools, and aren't seeing the current level of capabilities as a result.

17.07.2025 17:02 — 👍 1    🔁 0    💬 2    📌 0
Testing generative AI on legal questions—May 2025 update | The latest round of my informal testing

Finally, the work of mine that is discussed a bit is this informal testing of AI models on legal questions. The most recent post is here: www.wilftownsend.net/p/testing-ge...

17.07.2025 17:02 — 👍 0    🔁 0    💬 1    📌 0
Judicial Economy in the Age of AI | Individuals do not vindicate the majority of their legal claims because of access to justice barriers. This entrenched state of affairs is now facing a disrupti...

On the issue of AI increasing the number of lawsuits and a "Jevons paradox" for litigation, I would recommend @arbel.bsky.social's work here: papers.ssrn.com/sol3/papers....
and Henry Thompson has some interesting thoughts about these dynamics as well: henryathompson.com/s/Thompson-A...

17.07.2025 17:02 — 👍 1    🔁 0    💬 1    📌 0
Legal Tech, Civil Procedure, and the Future of Adversarialism | By David Freeman Engstrom and Jonah B. Gelbach, Published on 01/01/21

First, on using AI to predict damages / outcomes, check out work by David Freeman Engstrom and @gelbach.bsky.social, which discusses some areas this is happening (ctrl+f for "Walmart" to find relevant sections):
scholarship.law.upenn.edu/penn_law_rev...
scholarship.law.upenn.edu/jcl/vol23/is...

17.07.2025 17:02 — 👍 2    🔁 0    💬 2    📌 0

A very pleasant surprise to listen to one of my favorite podcasts and hear my own work being discussed. And it's an excellent episode and overview for anyone thinking about AI's effects on the legal profession. Some thoughts / suggestions below for anyone who wants further reading:

17.07.2025 17:02 — 👍 6    🔁 1    💬 1    📌 0

What an interesting question — cool study.

26.06.2025 13:58 — 👍 2    🔁 0    💬 1    📌 0

Judge Alsup has issued the first true opinion on fair use for generative AI, in Bartz v. Anthropic. He holds that AI training is fair use, and so is buying books to scan them, but that downloading pirated copies of books for an internal training-data database is not fair use. 🧵

24.06.2025 14:29 — 👍 28    🔁 16    💬 1    📌 1

I think this is one of the more common mistakes I see with people trying AI—the idea that if you go to a free chatbot, quickly run a question by it, and it does a bad job, then you've learned that AI cannot do a good job on that question.

17.06.2025 13:45 — 👍 3    🔁 0    💬 0    📌 0
US judge rules health insurers, MultiPlan must face price-fixing lawsuits | A U.S. judge on Tuesday said healthcare providers can pursue claims that technology provider MultiPlan and a group of insurers conspired to underpay them billions of dollars in reimbursements for out-of-network health services.

Significant ruling in one of the big algorithmic price-fixing lawsuits going on right now: www.reuters.com/legal/govern...

16.06.2025 14:04 — 👍 1    🔁 0    💬 0    📌 0
