Very excited to be part of this new AI Institute that is being led by Ellie Pavlick @brown.edu and to be able to work with so many experts, including @datasociety.bsky.social
www.brown.edu/news/2025-07...
@michelleding.bsky.social
CS PhD Researcher @ Brown, AI Governance & Sociotechnical Computing. She/Her. https://michelle-ding.github.io/
A poster for the paper "Position: Strong Consumer Protection is an Inalienable Defense for AI Safety in the United States"
I'll be presenting a position paper about consumer protection and AI in the US at ICML. I have a surprisingly optimistic take: our legal structures are stronger than I anticipated when I went to work on this issue in Congress.
Is everything broken rn? Yes. Will it stay broken? That's on us.
With their 'Sovereignty as a Service' offerings, tech companies are encouraging the illusion of a race for sovereign control of AI while remaining the true powers behind the scenes, write Rui-Jie Yew, Kate Elizabeth Creasey, and Suresh Venkatasubramanian.
07.07.2025 13:07

Very excited to see this piece out in @techpolicypress.bsky.social today. This was written together with @r-jy.bsky.social and Kate Elizabeth Creasey (a historian here at Brown), and calls out what we think is a scary and interesting rhetorical shift.
www.techpolicy.press/sovereignty-...
So the EU AI Act passed. Companies have to comply. AI regulation is here to stay. Right? Right?
FAccT 2025 paper with @r-jy.bsky.social and Bill Marino (not on bsky) incoming! 1/n
arxiv.org/abs/2506.01931
Excited to present "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" at #CHI2025 tomorrow (today)!
Tue, 29 Apr | 9:48–10:00 AM JST (Mon, 28 Apr | 8:48–9:00 PM ET)
G401 (Pacifico North 4F)
dl.acm.org/doi/10.1145/...
all welcomes are good welcomes even with a 4 year delay
26.04.2025 16:29

lol

26.04.2025 16:28

Also...super grateful & happy to be continuing my research journey at Brown in a CS PhD under @geomblog.bsky.social and @harinisuresh.bsky.social. Many papers to come (?)

25.04.2025 19:14

yay!!

25.04.2025 19:04

@michelleding.bsky.social has been doing amazing work laying out the complex landscape of "deepfake porn" and distilling the unique challenges in governing it. We hope this work informs future AI governance efforts to address the severe harms of this content - reach out to us to chat more!
25.04.2025 18:42

Any synthetic content governance framework or method that aims to target AIG-NCII of adults should also consider its applicability to the malicious technical ecosystem and the three limitations we point out above. More during our presentation at chi-staig.github.io!
25.04.2025 18:17

3. Adult AIG-NCII-specific methods often focus on red teaming general-purpose image generation models, e.g., Stable Diffusion. While this is also important, it rests on an erroneous assumption of a "trustworthy technology" abused by "malicious users," when in fact the tech itself here is maliciously designed
25.04.2025 18:08

2. AIG-NCII prevention methods often conflate child sexual abuse material (CSAM) with NCII of adults. But methods for CSAM (which often work with law enforcement databases) won't be as effective for adults due to different legal protections
25.04.2025 18:04

1. Transparency methods are very common in synthetic content governance frameworks, but they are not enough. Deepfake porn is not like other synthetic content, e.g., political deepfakes, because even human-detectable images are still harmful. In fact, many deepfake creators label images as "fake" to avoid accountability
25.04.2025 18:02

Then we review the current landscape of synthetic content governance methods as recorded by NIST AI 100-4 nvlpubs.nist.gov/nistpubs/ai/... and show 3 key limitations in governing AIG-NCII of adults
25.04.2025 17:59

Technical prevention is then a challenge for sociotechnical AI governance. In our paper, we break down and map what we call the "malicious technical ecosystem" that is used to create AIG-NCII of adults, including open-source face swapping models and 200+ "nudifier" apps that are free & easy to use
25.04.2025 17:56

There is a lot of work on responding to AIG-NCII through improved takedown mechanisms (e.g., the Take It Down Act) and legal recourse for survivors (e.g., the DEFIANCE Act). But response without prevention places the burden of removal and justice-seeking on survivors and does nothing to stop the creation of NCII
25.04.2025 17:49

AI-generated NCII is a form of image-based sexual abuse that results in severe mental, physical, financial, and reputational damage, as well as a gendered chilling effect. myimagemychoice.org is an organization that documents this extensively through survivor testimonials
25.04.2025 17:45

Excited to be presenting a new paper with @harinisuresh.bsky.social on the extremely critical topic of technical prevention/governance of adult AI-generated non-consensual intimate images, aka "deepfake pornography," at #CHI2025 chi-staig.github.io on 4/27 10:15-11:15 JST arxiv.org/abs/2504.17663
25.04.2025 17:41

Independent Bookstore Day - Saturday
25.04.2025 03:42

Extremely proud to finally launch the SRC Handbook: a project that I began with @geomblog.bsky.social and Julia Netter 1 year ago to bring topics of AI governance, privacy, accessibility, and more into Brown's CS courses. We now have an interdisciplinary team of 22 students on product/research!
25.04.2025 01:43

The 23andMe bankruptcy shows why data protection is important. But for genetic data, the problems are even more serious. Genetic data is used in so many places and is collected so widely that there are dangerous leaks everywhere. So much so that we wrote a paper on it. arxiv.org/abs/2502.09716 1/n
02.04.2025 13:32

Excited to be joining a great lineup of speakers at the Technical AI Governance workshop in Vancouver this summer
If you are working on AI governance, definitely consider submitting!
#ICML2025
An illustration by Amandine Forrest of a woman with long hair and unshaven legs holding a flower in her left hand and a book with the Tilted Axis Press logo in her right. It is a line drawing: the background is lavender, the lines are turquoise, the hair is deep purple, and the flower and book logo are yellow.
We are excited to announce our 2025 Annual Subscriptions! Don't miss out on your chance to save on this year's titles.
For the first time, we have introduced an annual subscription specifically designed for North America!
www.tiltedaxispress.com/store/2025-uk-print-subscription
Call for Presenters!
Last semester at the Center for Tech Responsibility, we had a speaker series consisting of grad students presenting their ongoing work on sociotechnical computing (broadly conceived). It was fantastic: a relaxed environment, brief presentations, and lots of discussion. 1/4
"a delightful ride through the world of tech responsibility" should be the new cntr slogan
04.12.2024 18:14