Agonistic Image Generation: Unsettling the Hegemony of Intention
Current image generation paradigms prioritize actualizing user intention - "see what you intend" - but often neglect the sociopolitical dimensions of this process. However, it is increasingly evident ...
Our paper on “Agonistic Image Generation”: arxiv.org/abs/2502.15242.
In collaboration with @andreiskiii.bsky.social, @ranjaykrishna.bsky.social, and @axz.bsky.social.
Presenting at #FAccT2025 on Thursday, June 26, in the "Group Behaviors and User Experiences" session with @andreiskiii.bsky.social! 6/6
25.06.2025 20:58 — 👍 0 🔁 0 💬 0 📌 0
We tested the interface with 29 participants. Results: it encouraged deeper thinking about representation, and users were more receptive to this approach than to superficial "diversity interventions." 5/6
25.06.2025 20:58 — 👍 0 🔁 0 💬 1 📌 0
We built an interface that researches controversies around user prompts, generates multiple interpretations (especially controversial ones), and encourages users to reflect on them before creating images. 4/6
25.06.2025 20:58 — 👍 0 🔁 0 💬 1 📌 0
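To make the interaction flow described in the post above concrete, here is a minimal Python sketch of that kind of staged pipeline: research controversies around a prompt, surface multiple contested interpretations, and ask the user to reflect before any image is generated. All names and stubs below are hypothetical illustrations, not the paper's implementation; a real system would back them with a retrieval step, an LLM, and a text-to-image model.

from dataclasses import dataclass


@dataclass
class Interpretation:
    """One possible reading of the user's prompt, tied to a point of contention."""
    summary: str
    contested_because: str


def research_controversies(prompt: str) -> list[str]:
    # Hypothetical stub: in practice this would query an LLM or retrieval
    # system for debates surrounding the prompt.
    return [f"Who gets depicted, and who decides, when someone asks for '{prompt}'?"]


def generate_interpretations(prompt: str, controversies: list[str]) -> list[Interpretation]:
    # Hypothetical stub: surface several readings, foregrounding contested ones.
    return [
        Interpretation(f"A literal, conventional rendering of '{prompt}'", controversies[0]),
        Interpretation(f"'{prompt}' as contested from a marginalized standpoint", controversies[0]),
    ]


def reflect_then_generate(prompt: str) -> None:
    # The ordering is the point: reflection is gated before generation,
    # not offered as an optional afterthought.
    controversies = research_controversies(prompt)
    interpretations = generate_interpretations(prompt, controversies)
    for i, interp in enumerate(interpretations, start=1):
        print(f"[{i}] {interp.summary}")
        print(f"    contested because: {interp.contested_because}")
    chosen = input("Which interpretation(s) do you want to pursue, and why? ")
    print(f"Noted: {chosen}. Only now would a text-to-image model be invoked.")


if __name__ == "__main__":
    reflect_then_generate("the Founding Fathers")

Running the sketch prints the interpretations for a prompt such as "the Founding Fathers" and pauses for the user's reflection before the (omitted) image-generation step.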
Most AI image tools aim to actualize your “mental image” of a prompt. But your mental image may not be the whole picture. What if interfaces helped users engage with broader debates about what prompts could mean—pursuing an interactive, social notion of diversity? 3/6
25.06.2025 20:58 — 👍 0 🔁 0 💬 1 📌 0
Why do we seem to care about some forms of diversity but less about others? The controversy revealed something important: authentic diversity isn't just “correct” demographic representation — it's about engaging with a pluralism of real human conflicts, views, and interests. 2/6
25.06.2025 20:58 — 👍 0 🔁 0 💬 1 📌 0
[FAccT 2025 paper!] 🧵
Remember the February 2024 Gemini controversy? People were upset when it generated "diverse" but historically inappropriate images, like Black Founding Fathers and Asian women as Nazi-era German soldiers. This sparked our research... 1/6
25.06.2025 20:52 — 👍 0 🔁 0 💬 1 📌 0
This upcoming #FAccT2025 paper was w/ an amazing duo of undergrads @andreiskiii.bsky.social @andrewshawuw.bsky.social & deeply fuses philosophy with human-AI interaction design. "Unsettling the hegemony of intention" indeed! 😛 It also won the undergrad thesis award at UW 🏅 arxiv.org/abs/2502.15242
24.06.2025 03:45 — 👍 25 🔁 6 💬 0 📌 0
my Bluesky manifestation / HCI PhD @ CMU exploring the power of everyday people to resist harmful algorithmic systems
all the other whatever at uhleeeeeeeshuh.com
~also enjoys weaving, musicals, grammar, ice cream, libraries~
PhD Candidate @ CMU HCII | Critical Computing + STS + Queer Art | he/they | https://jtaylor.gay | On the Job Market
NYT bestselling author of EMPIRE OF AI: empireofai.com. ai reporter. national magazine award & american humanist media award winner. words in The Atlantic. formerly WSJ, MIT Tech Review, KSJ@MIT. email: http://karendhao.com/contact.
Research & code: Research director @inria
►Data, Health, & Computer science
►Python coder, (co)founder of scikit-learn, joblib, & @probabl.bsky.social
►Sometimes does art photography
►Physics PhD
New here 👋🏽 PhD researcher on AI Alignment and Digital Democracy at ETH Zurich. Born in Australia, raised in Taiwan, based in Switzerland — at home in all. I look to history for what could be preserved, and digital democracy for what might be possible.
1st year PhD student studying human-centered AI at the University of Minnesota.
Website: https://malikkhadar.github.io/
CS PhD candidate at GroupLens Lab, UMN
MEng, BA philosophy, comp sci @ 🌽ell
yoga teacher, dog mom, aspiring matriarch of AI ethics
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
Incoming PhD @ MIT EECS. ugrad philosophy & CS @ UWa
University of Washington CSE PhD Student
Building my data praxis and researching how data scientists build theirs
Scholar, author, policy advisor
alondranelson.com
Science, Technology, and Social Values Lab
https://www.ias.edu/stsv-lab
Making government data make sense. No partisanship, no bias — just facts as clear as blue skies.
🔗: usafacts.org
Professor of social computing at UW CSE, leading @socialfutureslab.bsky.social
social.cs.washington.edu
Climate & AI Lead @HuggingFace, TED speaker, WiML board member, TIME AI 100 (She/her/Dr/🦋)
Personal Account
Founder: The Distributed AI Research Institute @dairinstitute.bsky.social.
Author: The View from Somewhere, a memoir & manifesto arguing for a technological future that serves our communities (to be published by One Signal / Atria)
data privacy/cybersecurity attorney by day, tech law professor/clinic director by night. into data rights, not into data wrongs.
Professor, researcher, author of Atlas of AI.
Latest work: Calculating Empires ~ www.calculatingempires.com
Book: https://thecon.ai
Web: https://faculty.washington.edu/ebender