Michelle L. Ding

@michelleding.bsky.social

organizer/researcher critically investigating how AI systems impact communities. cs phd @ brown cntr. she/her. 🌷 https://michelle-ding.github.io/ πŸ’­ https://michellelding.substack.com/

110 Followers  |  74 Following  |  26 Posts  |  Joined: 27.11.2024

Posts by Michelle L. Ding (@michelleding.bsky.social)

Left: 1942 "War Map" produced by Esso, now ExxonMobil, charting oil's role in transportation as "key to victory". Right: 2024 map produced by NVIDIA as part of an investor presentation slide deck describing its involvement in sovereign AI efforts globally. On the left panel: "Nations are awakening to the imperative to produce artificial intelligence using their own infrastructure, data workforces and business networks."


"The Commodification of AI Sovereignty: Lessons from the Fight for Sovereign Oil" (with Kate E. Creasey, Taylor Lynn Curtis, and @geomblog.bsky.social) is out now on arXiv: www.arxiv.org/abs/2601.11763.

24.01.2026 06:27 β€” πŸ‘ 11    πŸ” 3    πŸ’¬ 3    πŸ“Œ 0
Making Sense of AI Policy Using Computational Tools | TechPolicy.Press A new report examines how to use computational tools to evaluate policy, with AI policy as a case study.

We released a new report in partnership with the Center for Tech Responsibility at Brown University on how policymakers and researchers can better analyze AI legislation to protect our civil rights and liberties.

10.01.2026 22:36 β€” πŸ‘ 175    πŸ” 59    πŸ’¬ 4    πŸ“Œ 1
Briefing: Grok brings nonconsensual image abuse to the masses Plus: a new feature in the Meta Ad Library and a new Telegram investigation tool.

Today on @indicator.media's free weekly briefing: The staggering impunity of xAI, which turned its abusive image generator on its own users in full view and has barely done anything to contain it.

09.01.2026 15:50 β€” πŸ‘ 9    πŸ” 7    πŸ’¬ 1    πŸ“Œ 2
Caring for yourself and each other Resources for the Brown community, friends, family, loved ones and how to support us

New post by @michelleding.bsky.social on resources for the Brown community in the aftermath of the shooting. open.substack.com/pub/michelle...

21.12.2025 03:51 β€” πŸ‘ 8    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0

@mantzarlis.com and folks at @indicator.media have done incredible reporting & investigation on the AI nudification ecosystem that I'm constantly citing in my research on AIG-NCII - appreciate all the work you do!

04.12.2025 20:45 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Very glad to be a part of a new paper detailing how developers and developer platforms can prevent AIG-NCII, a form of image-based sexual abuse that disproportionately harms women and girls. Thanks to all the collaborators and Max Kamachee & @scasper.bsky.social for leading this important project!

04.12.2025 20:38 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Thanks to collaborators! This was a really interesting paper for me to work on, and it took a special group of interdisciplinary people to get it done.
Max Kamachee
@r-jy.bsky.social
@michelleding.bsky.social
@ankareuel.bsky.social
@stellaathena.bsky.social
@dhadfieldmenell.bsky.social

04.12.2025 17:32 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

Spotify thought I was 87, if that makes you feel better

04.12.2025 03:41 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

ACM members (and computing researchers who should be members) interested in contributing should join the subcommittee's mailing list!

One of our goals here is to build policy coalitions across institutions so we can do more as a collective πŸ’ͺ and balance special interest groups.

25.11.2025 17:43 β€” πŸ‘ 16    πŸ” 6    πŸ’¬ 1    πŸ“Œ 0
Serena Booth And Suresh Venkatasubramanian Co-Chair ACM’s US Technology Policy Committee’s Subcommittee On AI And Algorithms Brown CS faculty members Serena Booth and Suresh Venkatasubramanian have just been appointed to co-chair the AI and Algorithms Subcommittee, whose recent work includes responses to government RFIs, te...

@reniebird.bsky.social and I have just been appointed to co-chair @TheOfficialACM's US Technology Policy Committee’s Subcommittee on AI and Algorithms. cs.brown.edu/news/2025/11...

25.11.2025 17:31 β€” πŸ‘ 19    πŸ” 2    πŸ’¬ 1    πŸ“Œ 2

PSA: tips to protect yourself from scams on Signal.

Every major comms platform has to contend w phishing, impersonation, & scams. Sadly.

Signal is major, and as we've grown we've heard about more of these attacks--scammy people pretending to be something or someone to trick and abuse others. 1/

11.11.2025 18:13 β€” πŸ‘ 546    πŸ” 241    πŸ’¬ 3    πŸ“Œ 12
Tech platforms promised to label AI content. They're not delivering. An Indicator audit of hundreds of synthetic images and videos reveals that platforms frequently fail to label AI content

Today on @indicator.media: A first-of-its-kind audit of AI labels on major social platforms.

23.10.2025 12:45 β€” πŸ‘ 91    πŸ” 39    πŸ’¬ 2    πŸ“Œ 5
Who has the luxury to think? Researchers are responsible for more than just papers.

Hi friends! After much thinking & doodling, I'm excited to share my new substack "Finding Peace in an AI-Everywhere World" 🌷 🌏

Here is the first article based on some reflections I had at COLM: michellelding.substack.com/p/who-has-th...

20.10.2025 15:35 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Technologies like synthetic data, evaluations, and red-teaming are often framed as enhancing AI privacy and safety. But what if their effects lie elsewhere?

In a new paper with @realbrianjudge.bsky.social at #EAAMO25, we pull back the curtain on AI safety's toolkit. (1/n)

arxiv.org/pdf/2509.22872

17.10.2025 21:09 β€” πŸ‘ 17    πŸ” 6    πŸ’¬ 1    πŸ“Œ 1
Rebuilding an Optimistic Vision for AI Policy Recall November 6, 2024 β€” the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was heartbroken not because of the rebuke to my political party...

I wrote a (personal) blog post about my hopes and dreams for AI policy, my devastation after the US Election, and my process of picking myself off the floor by rebuilding an optimistic vision for AI scientists in government through education: simons.berkeley.edu/news/rebuild...

13.10.2025 13:56 β€” πŸ‘ 14    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0
Grey background with CNTR logo and black text that says:

Testing LLMs in a sandbox isn’t responsible. Focusing on community uses and needs is. 

Michelle L. Ding, Jo Gasior-Kavishe, Victor Ojewale, and Suresh Venkatasubramanian

Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025
Full abstract available here bit.ly/solar-cntr 

Photos of authors in order of names above

Brown Data Science Institute, Center for Technological Responsibility, Reimagination, and Redesign


11.10.2025 16:12 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Grey background with black text that says:

Slide 1: What can you do as a researcher?
Look into your local community - develop sustainable, long-term partnerships with city-level or state-level coalitions, networks, news agencies, or non-profit organizations that do on-the-ground work to discover harms
University researchers – look beyond your department into other disciplines in the humanities and social sciences to ground your research in interdisciplinary methods
Industry researchers – look towards trust & safety teams, policy teams, and human rights teams and establish collaborations where research reflects real uses. Or advocate to (re)establish these teams.


Slide 2: Contribute to a crowdsourced, open source database: bit.ly/solar-cntr

Case studies are a great way for us to share, learn, teach, and reflect on methods together. 
At SoLaR 2025, we hope to crowdsource a spreadsheet of community-driven LLM evaluation case studies that orient us towards contextualized, participatory, expertise-driven evaluations.
Please add your own or related work to our spreadsheet :) And let’s stay connected! We also have a tab for contact info!


More resources here: bit.ly/solar-cntr 🌱 Happy to chat with anyone interested in learning more!

11.10.2025 16:07 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Brown background with CNTR logo with white text that says:

Responsibility is not marginally improving LLMs to fit every use case.

Responsibility is knowing when to use them and when not to.


11.10.2025 16:04 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Brown background with CNTR logo with white text that says:

In LLM evaluations, we - researchers - are not the experts. 

We are the facilitators…

…who enable communities to evaluate LLMs for themselves

When we allow their expertise and lived experiences to guide our methodology…

We will organically develop more meaningful scientific insights…

And escape the vicious cycle of abstract benchmarks that only serve to misdirect and confuse us.


11.10.2025 16:02 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Brown background and CNTR logo with white text that says:

Community-driven evaluations yield good scientific insights. 

They strip away marketing, hype, and unnecessary abstractions

And reveal the contexts where LLMs might genuinely add value and where they do not.

This opens space for meaningful dialogue between stakeholders across academia, industry, government, and civil society

And empowers the public to engage with and assess for themselves whether the tools they are being presented with can be used responsibly.


Thank you everyone who came to our talk! Here are some highlights:

11.10.2025 16:01 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ’‘We kicked off the SoLaR workshop at #COLM2025 with a great opinion talk by @michelleding.bsky.social & Jo Gasior Kavishe (joint work with @victorojewale.bsky.social and @geomblog.bsky.social) on "Testing LLMs in a sandbox isn't responsible. Focusing on community use and needs is."

10.10.2025 14:31 β€” πŸ‘ 15    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0
Agree to Disagree? A Meta-Evaluation of LLM Misgendering Numerous methods have been proposed to measure LLM misgendering, including probability-based evaluations (e.g., automatically with templatic sentences) and generation-based evaluations (e.g., with aut...

Have you or a loved one been misgendered by an LLM? How can we evaluate LLMs for misgendering? Do different evaluation methods give consistent results?
Check out our preprint led by the newly minted Dr. @arjunsubgraph.bsky.social, and with Preethi Seshadri, Dietrich Klakow, Kai-Wei Chang, Yizhou Sun

11.06.2025 13:28 β€” πŸ‘ 15    πŸ” 4    πŸ’¬ 1    πŸ“Œ 3
An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies In an era of increasing societal fragmentation, political polarization, and erosion of public trust in institutions, representative deliberative assemblies are emerging as a promising democratic forum...

🚨 New preprint! 🚨
Excited to share my work: An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies πŸ€–πŸ—³οΈ

I’ll be presenting this at @colmweb.org in the NLP4Democracy workshop!

πŸ”— arxiv.org/abs/2509.12577

17.09.2025 17:40 β€” πŸ‘ 8    πŸ” 2    πŸ’¬ 1    πŸ“Œ 1
Testing LLMs in a sandbox isn’t responsible. Focusing on community use and needs is.

Read our full opinion abstract here: cntr.brown.edu/news/2025-09...

09.10.2025 19:33 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025 COLM 2025 in-person Workshop, October 10th at the Palais des Congrès in Montreal, Canada

Hi #COLM2025! πŸ‡¨πŸ‡¦ I will be presenting a talk on the importance of community-driven LLM evaluations based on an opinion abstract I wrote with Jo Kavishe, @victorojewale.bsky.social and @geomblog.bsky.social tomorrow at 9:30am in 524b for solar-colm.github.io

Hope to see you there!

09.10.2025 19:32 β€” πŸ‘ 9    πŸ” 6    πŸ’¬ 1    πŸ“Œ 0
Brown University to lead national institute focused on intuitive, trustworthy AI assistants A new institute, based at Brown and supported by a $20 million National Science Foundation grant, will convene researchers to guide development of a new generation of AI assistants for use in mental a...

Very excited to be part of this new AI Institute that is being led by Ellie Pavlick @brown.edu and to be able to work with so many experts, including @datasociety.bsky.social

www.brown.edu/news/2025-07...

29.07.2025 15:26 β€” πŸ‘ 23    πŸ” 4    πŸ’¬ 2    πŸ“Œ 3
A poster for the paper "Position: Strong Consumer Protection is an Inalienable Defense for AI Safety in the United States"


I'll be presenting a position paper about consumer protection and AI in the US at ICML. I have a surprisingly optimistic take: our legal structures are stronger than I anticipated when I went to work on this issue in Congress.

Is everything broken rn? Yes. Will it stay broken? That's on us.

14.07.2025 13:01 β€” πŸ‘ 19    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.

With their 'Sovereignty as a Service' offerings, tech companies are encouraging the illusion of a race for sovereign control of AI while being the true powers behind the scenes, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.

07.07.2025 13:07 β€” πŸ‘ 15    πŸ” 11    πŸ’¬ 0    πŸ“Œ 6
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.

Very excited to see this piece out in @techpolicypress.bsky.social today. This was written together with @r-jy.bsky.social and Kate Elizabeth Creasey (a historian here at Brown), and calls out what we think is a scary and interesting rhetorical shift.

www.techpolicy.press/sovereignty-...

07.07.2025 13:50 β€” πŸ‘ 21    πŸ” 8    πŸ’¬ 0    πŸ“Œ 4
Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act The shape of AI regulation is beginning to emerge, most prominently through the EU AI Act (the "AIA"). By 2027, the AIA will be in full effect, and firms are starting to adjust their behavior in light...

So the EU AI Act passed. Companies have to comply. AI regulation is here to stay. Right? Right?

FAccT 2025 paper with @r-jy.bsky.social and Bill Marino (not on bsky) πŸ“œ incoming! 1/n

arxiv.org/abs/2506.01931

12.06.2025 22:33 β€” πŸ‘ 8    πŸ” 4    πŸ’¬ 2    πŸ“Œ 0