Only a couple of days after my last post, vibe hacking in full force.
26.02.2026 14:44
www.bloomberg.com/news/article...
Followed by a panel on GenAI, Agentic AI, Law, and CS (1:15-2:00pm ET) with @peterhenderson.bsky.social (Princeton) and Georgios Piliouras (Google DeepMind)
Spotlight Talks (2:30pm-4:00pm) by
@aloni-bologna.bsky.social (UChicago), Rebecca Wexler (Columbia), and @jubaz.bsky.social (Georgia Tech)
Unfortunately, the scale of the problem makes this challenging. Even if we freeze at Codex-5.3/Opus-4.6-level capabilities, attackers can probably scaffold them to identify tons of vulnerabilities pretty easily.
As models discover more exploits, we may need something like a responsible disclosure period for major jumps in cyber capabilities. Before release, trusted defenders get privileged access to the more capable model. Together, they scan for vulnerabilities at scale & notify as many as possible.
24.02.2026 03:43
Missing from the headline: "using Claude Code."
Vibe hacking is already a thing. I've been saying this for a while, but no model-level safeguards will prevent it entirely. What they can do is slow it down enough for us to put societal-level safeguards in place.
www.popsci.com/technology/r...
That was fast.
31.01.2026 03:09
New copyright law "hypothetical" just dropped.
29.01.2026 23:16
Warner Music and Udio settle their copyright case, agree to collaborate on "new song creation service that will allow users to remix tunes by established artists." Expect more such settlements as copyright holders look to leverage AI to boost revenue!
20.11.2025 16:52
We've been pushing hard on AI for public good. One example: partnering with Courtlistener to launch accessible legal semantic search! Many more cool AI projects coming soon from my group aimed at improving access to justice, often spearheaded by @dominsta.bsky.social!
07.11.2025 02:15
Sora2 is speedrunning my AI law class. We covered issues with copyrighted characters in week 2, and right of publicity claims in week 3. Georgia has a postmortem right of publicity claim. Some states don't (e.g., famous Marilyn Monroe estate battle).
17.10.2025 20:06
How the Gemini Computer Use Agent feels about the "Choose Chrome" popup.
gemini.browserbase.com
Why might AI companies take on larger copyright litigation risks? If they estimate AGI-scale impacts are 2-3 yrs out, litigation will lag that long. By then, the bet might be: govts step in (too big to fail), rightsholders reliant on AI, fair use prevails, or have $$$ to settle.
01.10.2025 21:56
Quick take: Are open-weight AI models getting a fair shake in evals? A few thoughts on comparing systems-to-models, sparked by Anthropic's recent postmortem.
Check out our most recent post: www.ailawpolicy.com/p/quick-take...
GPT-5-codex just "git reset --hard" ongoing changes in a repo, saying "I panicked!"
h/t Zeyu Shen @ Princeton
Can an AI model be "born secret" when it comes to nuclear and radiological risks? What powers does the Atomic Energy Act give the federal government over frontier models?
It might be more than you think! And may preempt parts of state regs. Check out our post: www.ailawpolicy.com/p/ai-born-se...
Some quick thoughts on the recent copyright litigation developments:
"Anthropic Settles Its Copyright Litigationβand Why That Was the Right Move"
www.ailawpolicy.com/p/anthropic-...
Annnnnndddd Judge Alsup just rejected the settlement. Still some time to fix it. Rejection was mostly on the grounds that the class was under-specified (no final list of works, no opt-out/notification mechanism solidified).
news.bloomberglaw.com/ip-law/anthr...
New on the CITP Blog: "Statutory Construction & Interpretation for AI" > What if an LLM concludes a user's behavior is "egregiously immoral" -- & contacts authorities?
CITP researchers with Prof @peterhenderson.bsky.social's
POLARIS Lab provide a possible explanation.
The terms of Anthropic's settlement w/book authors just came out.
$1.5B to authors in libgen (Books3 corpus)!
Interestingly, this is ~$3k per book, close to the terms that HarperCollins allegedly gave to authors for their books ($2.5k). Consensus price forming?
Work with amazing folks: Lucy He, Nimra Nadeem, Michel Liao, Howard Chen, Danqi Chen, & Mariano-Florentino Cuéllar @carnegieendowment.org
05.09.2025 13:59
Basically, if we're going to take model specs/constitutional AI seriously, we need to optimize rules and build out surrounding consistency-enhancing structures, paralleling the legal system.
Let's build better natural language laws and law-following AI together! If interested, reach out!
Obviously, lots more to do in this space! I'm super excited about this direction and the forthcoming work that we're building out.
05.09.2025 13:57
3️⃣ These computational tools, we think, can also be applied to positive models of the legal system, something that we're tackling. More on this soon!
05.09.2025 13:57
2️⃣ We leverage interpretive constraints or ambiguity to induce more consistent interpretations and debug laws for AI. These computational tools allow us not only to build more rigorous laws for AI, but also to add a layer of visibility on what can go wrong, ex ante.
05.09.2025 13:57
A few quick takeaways below, but I'll drop more findings soon on this dense paper:
1️⃣ Given the same set of rules, models will interpret scenarios wildly differently. This gives us a mechanism to quantify interpretive ambiguity.
We model a space of reasonable interpreters and then modify rules, or add interpretive constraints, to reduce the entropy of the distribution.
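The entropy framing above can be made concrete. A minimal sketch (the function name and verdict labels are illustrative, not from the paper), assuming each sampled "reasonable interpreter" returns a discrete verdict for the same scenario under the same rule:

```python
import math
from collections import Counter

def interpretation_entropy(interpretations):
    """Shannon entropy (in bits) over a distribution of interpretations.

    `interpretations` is a list of verdict labels, one per sampled
    interpreter. Higher entropy means more interpretive ambiguity;
    a rule edit or interpretive constraint that drives agreement
    pushes the entropy toward zero.
    """
    counts = Counter(interpretations)
    n = len(interpretations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A rule that interpreters split on 50/50 is maximally ambiguous (1 bit).
ambiguous = ["violation", "no_violation"] * 5
# After adding a constraint that yields agreement, entropy drops to 0.
constrained = ["violation"] * 10
```

For example, `interpretation_entropy(ambiguous)` is 1.0 while `interpretation_entropy(constrained)` is 0.0, which is the kind of gap the rule-editing step aims to close.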
05.09.2025 13:57
Check out our new work, Statutory Construction and Interpretation for Artificial Intelligence, doing exactly this!
Paper: arxiv.org/abs/2509.01186
Policy Brief: www.polarislab.org/briefs/Statu...
Blog: www.polarislab.org#/blog/statut...
Wonder why Claude decided to report users to the authorities? It might be because its constitution says Claude should choose responses in the long-term interest of humanity!
But what if we could leverage computational and legal tools to "debug" or "lint" AI rules/laws for ambiguity?
🧵!