
A. Feder Cooper

@afedercooper.bsky.social

ML researcher, MSR + Stanford postdoc, future Yale professor https://afedercooper.info

367 Followers  |  199 Following  |  150 Posts  |  Joined: 27.06.2023

Latest posts by afedercooper.bsky.social on Bluesky

I’ll be speaking on the panel, then presenting the poster with several of my co-authors.

Joint w/ @marklemley.bsky.social @mbogen.bsky.social @nicolaspapernot.bsky.social @klyman.bsky.social @milesbrundage.bsky.social @jtlg.bsky.social @hannawallach.bsky.social @zephoria.bsky.social + many others

03.12.2025 20:57 | 👍 3  🔁 1  💬 0  📌 0

Co-led with @katherinelee.bsky.social, this cross-institution, cross-disciplinary collaboration details the challenges of using machine unlearning in generative AI to meet substantive goals in law and policy (with a particular focus on copyright, privacy, and safety contexts).

03.12.2025 20:57 | 👍 1  🔁 0  💬 1  📌 0

[NeurIPS '25] Our oral slot and poster session on "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research" are tomorrow, December 4! [https://arxiv.org/abs/2412.06966]

Oral: 3:30-4pm PST, Upper Level Ballroom 20AB

Poster 1307: 4:30-7:30pm PST, Exhibit Hall C-E

03.12.2025 20:57 | 👍 1  🔁 2  💬 1  📌 0

Tutorial tomorrow at 1:30PM PST!

My talk slots will cover memorization + copying in models and their outputs, canonical extraction methods, and recent work with @marklemley.bsky.social and others on extracting pieces of memorized books from open-weight models.

arxiv.org/abs/2505.12546

01.12.2025 18:38 | 👍 9  🔁 4  💬 0  📌 0

I'm at NeurIPS & hiring for our pretraining safety team at OpenAI! Email me if you want to chat about making safer base models!

01.12.2025 06:03 | 👍 3  🔁 2  💬 1  📌 0
Comparison requires valid measurement: Rethinking attack success rate comparisons in AI red teaming
In this position paper we argue that conclusions drawn about relative system safety or attack method efficacy via AI red teaming are often not supported by evidence provided by attack success rate...

Comparison requires valid measurement: Rethinking attack success rate comparisons in AI red teaming

Poster #1110 | Fri Dec 5, 4:30-7:30pm PST, Exhibit Hall C,D,E

openreview.net/forum?id=d7h...

30.11.2025 22:26 | 👍 0  🔁 0  💬 0  📌 0
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research
"Machine unlearning" is a popular proposed solution for mitigating the existence of content in an AI model that is problematic for legal or moral reasons, including privacy, copyright, safety, and more...

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research

Oral | Thu Dec 4, 3:30-4pm PST, Upper Level Ballroom 20AB

Poster #1307 | Thu Dec 4, 4:30-7:30pm PST, Exhibit Hall C,D,E

arxiv.org/abs/2412.06966

30.11.2025 22:26 | 👍 0  🔁 0  💬 1  📌 1
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text
Large language models (LLMs) are typically trained on enormous quantities of unlicensed text, a practice that has led to scrutiny due to possible intellectual property infringement and ethical concerns...

The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text

Poster #102 | Fri Dec 5, 11am-2pm PST, Exhibit Hall C,D,E

arxiv.org/abs/2506.05209

30.11.2025 22:26 | 👍 0  🔁 0  💬 1  📌 0
Exploring the limits of strong membership inference attacks on large language models
State-of-the-art membership inference attacks (MIAs) typically require training many reference models, making it difficult to scale these attacks to large pre-trained language models (LLMs). As a result...

Exploring the limits of strong membership inference attacks on large language models

Poster #1300 | Fri Dec 5, 11am-2pm PST, Exhibit Hall C,D,E

arxiv.org/abs/2505.18773

30.11.2025 22:26 | 👍 0  🔁 0  💬 1  📌 1
Data Privacy, Memorization, and Legal Implications in Generative AI - NeurIPS 2025 Tutorial
NeurIPS 2025 Tutorial on data privacy, memorization, and legal implications in generative AI: practical guidance at the intersection of ML, law, and policy.

Tutorial: Data Privacy, Memorization, & Legal Implications in Generative AI

Tue Dec 2, 1:30-4pm PST, Exhibit Hall F

w/ @pratyushmaini.bsky.social + Joe Gratz

memlaw-tutorial.github.io

30.11.2025 22:26 | 👍 0  🔁 0  💬 1  📌 2

Excited to be at NeurIPS this week in San Diego! Please reach out (best over email) if you’d like to chat about privacy & security, scalable evals, and reliable ML systems.

I’ll be presenting a few papers and speaking at some events; please stop by! Will post details throughout the week (summary below)

30.11.2025 22:26 | 👍 7  🔁 1  💬 1  📌 0
Yale University, Institute for the Foundations of Data Science
Job #AJO31114: Postdoc in Foundations of Data Science, New Haven, Connecticut, US

📣 Postdocs at Yale FDS! 📣 Tremendous freedom to work on data science problems with faculty across campus, multi-year, great salary. Deadline 12/15. Spread the word! Application: academicjobsonline.org/ajo/jobs/31114 More about Yale FDS: fds.yale.edu

18.11.2025 03:54 | 👍 23  🔁 13  💬 0  📌 1

Just finished reading the GEMA v. OpenAI decision (slowly, my German isn't great). Looks like no small part of the analysis tracked arguments @jtlg.bsky.social and I made in 2024.

I don't have a well-formed response yet, but hopefully soon. (Main thought atm is a very unpolished "woah")

12.11.2025 21:35 | 👍 3  🔁 0  💬 0  📌 0
42-O-14139-24-Endurteil.pdf

Today's decision in GEMA v. OpenAI by a German court holds that ChatGPT infringes copyright when it memorizes song lyrics. The opinion cites my paper with @afedercooper.bsky.social on memorization in generative models, and its analysis tracks ours.

drive.google.com/file/d/1dUaD...

12.11.2025 00:58 | 👍 33  🔁 16  💬 1  📌 2

Omg I cackled

05.11.2025 04:34 | 👍 2  🔁 0  💬 0  📌 0

Bill Ackman gotta be on the third draft of a tweet longer than Middlemarch right now

05.11.2025 02:52 | 👍 12431  🔁 1184  💬 226  📌 60

happy to dm you about it :)

01.11.2025 18:03 | 👍 2  🔁 0  💬 0  📌 0

I could see why someone not actively doing research in that subfield would very reasonably think that. But as someone who used to publish there, I’ll just say “lol”

01.11.2025 16:58 | 👍 5  🔁 0  💬 1  📌 0

I’m kinda known as a copyright person, but (even in memorization) I mainly study how to draw reliable conclusions from large-scale AI/ML systems. There’s a long spiel why, but today I feel defeated. 100 hours/week on this for 6 years, just to find out a parent treats Gemini in search as ground truth.

01.11.2025 16:44 | 👍 4  🔁 0  💬 0  📌 0

what!! scholar counts the same paper more than once! that's insane! I guess I've noticed this and merged them, e.g., if I've changed the title in revisions. But I assumed once you did that they de-duped any citations. Because you're telling them it's the same paper...

28.10.2025 02:07 | 👍 1  🔁 0  💬 1  📌 0

lol

what irony that I often feel like the imposter, producing only 6-8 papers as a core contributor per year

27.10.2025 18:37 | 👍 1  🔁 0  💬 0  📌 0

I just feel particularly dumb not to have considered this, seeing as I do a lot of work on manipulating ML models. This just would never have occurred to me as a thing that was worth doing.
This is just...so (likely usually) petty and stupid.

27.10.2025 18:35 | 👍 2  🔁 0  💬 1  📌 0

Omg what

27.10.2025 09:46 | 👍 1  🔁 0  💬 1  📌 0

✨✨

23.10.2025 20:34 | 👍 1  🔁 0  💬 0  📌 0
2026 Call for Papers | ACM Symposium on Computer Science & Law
5th ACM Symposium on Computer Science and Law, March 3-5, 2026, Berkeley, California

This is a really great community of researchers, and every accepted paper gets a generously long talk slot to present.

CFP: computersciencelaw.org/2026-2/2026-...

Main track deadline (archival and non-archival): September 30, AoE

26.09.2025 22:10 | 👍 1  🔁 2  💬 0  📌 0

The NeurIPS position track turned away a large number of extraordinary papers that surpassed the acceptance bar, holding the acceptance rate to an unusually low 6%.

If you have a rejected paper at the intersection of ML and law, consider submitting to ACM CSLaw '26.

26.09.2025 22:10 | 👍 5  🔁 2  💬 1  📌 0
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
We articulate fundamental mismatches between technical methods for machine unlearning in Generative AI, and documented aspirations for broader impact that these methods could have for law and policy. ...

Our paper "Machine Unlearning Doesn't Do What You Think" was accepted for presentation at NeurIPS

Congrats @afedercooper.bsky.social and @katherinelee.bsky.social, who led the effort

arxiv.org/abs/2412.06966

26.09.2025 18:37 | 👍 21  🔁 4  💬 1  📌 0

One more week to submit to CSLaw '26!!

24.09.2025 18:55 | 👍 0  🔁 0  💬 0  📌 0

at least 100k with all the appendices 🙃

18.09.2025 19:15 | 👍 2  🔁 0  💬 0  📌 0
AI Copyright Lawsuits with Pam Samuelson | Scaling Laws

For an update on the state of play in the generative AI copyright cases, try this podcast: shows.acast.com/arbiters-of-...

16.09.2025 20:51 | 👍 6  🔁 1  💬 0  📌 0
