This is such a great paper and really helps to emphasize how data underspecification in ML systems biases our understanding and decision-making, especially in inequitable, resource-scarce settings. Thanks for sharing @emmharv.bsky.social!
27.07.2025 23:38
Emma has such good research taste :)
Given the sheer scale of these events, it's really helpful to see what caught people's eye at these conferences...
26.07.2025 02:52
Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
What's super cool to me about this paper is that it does a longitudinal analysis - so many audit studies stick to a single point in time, and this paper is a great demonstration that the data available to you at that time will inevitably impact your measurements.
🔗: dl.acm.org/doi/10.1145/...
24.07.2025 19:52
The authors find that delays in reporting patient demographics are the norm (for >50% of patients, race is reported >60 days after other details like DOB). These delays obfuscate measurement of health outcomes and health disparities, and techniques like imputing race do not improve measurement.
24.07.2025 19:52
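To make the mechanism concrete, here's a minimal sketch (not the paper's method; every column name and number below is made up) of how a disparity estimate taken before late race reports arrive can differ from the estimate once reporting is complete:

```python
# Minimal illustration (NOT the paper's method): a between-group outcome gap
# estimated before late-arriving race reports come in vs. after they arrive.
# All column names and numbers below are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "outcome":    [1, 0, 1, 1, 0, 0],          # e.g., a binary health outcome
    "race":       ["A", "B", "A", "B", "A", "B"],
    "race_report_delay_days": [5, 20, 10, 120, 7, 80],
})

def disparity_at(snapshot_day: int) -> float:
    """Gap in mean outcomes between groups A and B, using only the race
    values that had been reported by `snapshot_day`."""
    known = records[records["race_report_delay_days"] <= snapshot_day]
    rates = known.groupby("race")["outcome"].mean()
    return rates.get("A", float("nan")) - rates.get("B", float("nan"))

print("estimated gap at day 30: ", disparity_at(30))   # many race values still missing
print("estimated gap at day 180:", disparity_at(180))  # after most reports have arrived
```

In this toy example the day-30 estimate is double the day-180 estimate, purely because of which race values had been reported by each snapshot date.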
Screenshot of paper title and author list:
Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments
Jennah Gosciak, Aparna Balagopalan, Derek Ouyang, Allison Koenecke, Marzyeh Ghassemi, Daniel E. Ho
Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments by @jennahgosciak.bsky.social and @aparnabee.bsky.social et al. (incl. @allisonkoe.bsky.social @marzyehghassemi.bsky.social) analyzes how missing demographic data impacts estimates of health disparities.
24.07.2025 19:52
Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
I love how this paper emphasizes that evaluation != accountability and recommends steps towards accountability: ensuring that tools are open, valid, and reliable; focusing on tools to support participatory methods; and ensuring auditors are protected from retaliation.
🔗: dl.acm.org/doi/full/10....
24.07.2025 19:52
The authors analyze 435(!) tools, finding that most focus on evaluation; tools for other aspects of AI audits, like harms discovery, communicating audit results, and advocating for change, are less common. Further, while many tools are freely available, auditors often struggle to use them.
24.07.2025 19:52
Screenshot of paper title and author list:
Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling
Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, Inioluwa Deborah Raji
Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling by @victorojewale.bsky.social @rbsteed.com @briana-v.bsky.social @abeba.bsky.social @rajiinio.bsky.social compares the landscape of AI audit tools (tools.auditing-ai.com) to the actual needs of AI auditors.
24.07.2025 19:52
External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
This work won a Best Paper Award at FAccT! I think it's a fantastic example of an external audit that not only identifies a problem but also provides concrete steps towards a solution.
🔗: dl.acm.org/doi/10.1145/...
24.07.2025 19:52
The authors show that the external review established by the settlement is insufficient to guarantee that Meta is actually reducing discrimination in ad delivery (as opposed to adversarially complying by showing the same ad repeatedly to one person or applying VRS only to small ad campaigns).
24.07.2025 19:52
Screenshot of paper title and author list:
External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery
Basileal Imana, Zeyu Shen, John Heidemann, Aleksandra Korolova
External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery by Imana et al. audits VRS (Meta's process for reducing bias in ad delivery as part of a settlement with DOJ), and finds VRS reduces demographic differences in ad audience, but also reduces reach and increases cost.
24.07.2025 19:52
I thought this paper was really interesting, and I particularly appreciated the authors' point that models can make decisions that are "consistent" but still "arbitrary" if model selection is not done in a principled way!
🔗: dl.acm.org/doi/10.1145/...
24.07.2025 19:52
The authors propose that opportunity pluralism is most important in domains that involve normative or high-uncertainty decisions, or where decision-subjects can choose among multiple decision-makers. Even in those domains, the authors argue that individual models should still be consistent.
24.07.2025 19:52
Screenshot of paper title and author list:
Consistently Arbitrary or Arbitrarily Consistent: Navigating the Tensions Between Homogenization and Multiplicity in Algorithmic Decision-Making
Shira Gur-Arieh, Christina Lee
Consistently Arbitrary or Arbitrarily Consistent: Navigating the Tensions Between Homogenization and Multiplicity in Algorithmic Decision-Making by Gur-Arieh and Lee explores the competing desires for consistency in decision-making models and opportunity pluralism in decision-making ecosystems.
24.07.2025 19:52
Check out our work at @ic2s2.bsky.social this afternoon during the Communication & Cooperation II session!
23.07.2025 10:01
Annual Meeting of the Association for Computational Linguistics (2025) - ACL Anthology
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Wanxiang Che | Joyce Nabende | Ekaterina Shutova | Mohammad Taher Pilehvar
The ACL 2025 Proceedings are live on the ACL Anthology!
We're thrilled to pre-celebrate the incredible research that will be presented starting Monday next week in Vienna!
Start exploring: aclanthology.org/events/acl-2...
#NLProc #ACL2025NLP #ACLAnthology
22.07.2025 20:00
I broke the thread!
21.07.2025 15:49
Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
My favorite part of this paper was the point that "synthetic data creates distance between individuals and the data...derived from [them]." Synthetic data is often considered privacy-preserving, but it can actually reduce opportunities for participation and redress!
🔗: dl.acm.org/doi/10.1145/...
21.07.2025 15:47
The authors find that, while synthetic data has benefits (e.g., preventing humans from annotating harmful content), it can also flatten identity and reinforce stereotypes. Its quality is challenging to validate, esp. if the same auxiliary models are used to produce training and evaluation data.
21.07.2025 15:47
A screenshot of the paper title and author list:
Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline
Shivani Kapania, Stephanie Ballard, Alex Kessler, Jennifer Wortman Vaughan
Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline by Kapania et al. (incl. @jennwv.bsky.social) asks: what are practitioners' motivations, current practices, desiderata, and challenges when generating, using, and validating synthetic data to develop AI?
21.07.2025 15:47
Not Even Nice Work If You Can Get It; A Longitudinal Study of Uber's Algorithmic Pay and Pricing | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
I love how this paper (in partnership with @workerinfox.bsky.social) engaged with drivers to center the questions that drivers actually had about their pay! I am also very intrigued by the prospect of using DSARs to conduct large-scale algorithm audits.
🔗: dl.acm.org/doi/10.1145/...
21.07.2025 15:47
Uber does not share detailed pay information with drivers, so the authors relied on Data Subject Access Requests (which GDPR requires Uber to fulfill). They find that, under dynamic pricing, pay-per-hour fell, standby time increased, and the share of fare that drivers received varied ride-to-ride.
21.07.2025 15:47
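For a sense of the quantities involved, here's a minimal sketch (assuming hypothetical DSAR export columns like start_time, rider_fare, and driver_pay; this is not the authors' pipeline) that computes pay per engaged hour and the driver's share of each fare by month:

```python
# Minimal sketch, not the authors' pipeline: pay per engaged hour and the
# driver's share of each fare, from a hypothetical DSAR-style trip export.
# Column names and values below are assumptions for illustration only.
import pandas as pd

trips = pd.DataFrame({
    "start_time": pd.to_datetime(["2023-01-05 09:00", "2023-01-20 18:00",
                                  "2023-06-03 09:00", "2023-06-21 18:00"]),
    "end_time":   pd.to_datetime(["2023-01-05 09:30", "2023-01-20 18:45",
                                  "2023-06-03 09:40", "2023-06-21 18:55"]),
    "rider_fare": [12.0, 20.0, 14.0, 22.0],   # what the rider paid
    "driver_pay": [9.0, 16.0, 8.5, 13.0],     # what the driver received
})

# Share of the fare that went to the driver (varies ride-to-ride under dynamic pricing).
trips["driver_share"] = trips["driver_pay"] / trips["rider_fare"]
# Engaged time per trip, in hours.
trips["engaged_hours"] = (trips["end_time"] - trips["start_time"]).dt.total_seconds() / 3600

# Aggregate by month to look at the longitudinal trend.
monthly = trips.groupby(trips["start_time"].dt.to_period("M")).agg(
    total_pay=("driver_pay", "sum"),
    total_hours=("engaged_hours", "sum"),
    median_share=("driver_share", "median"),
)
monthly["pay_per_hour"] = monthly["total_pay"] / monthly["total_hours"]
print(monthly[["pay_per_hour", "median_share"]])
```

A real analysis would also need to reconstruct standby time between trips, which this per-trip view ignores.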
Screenshot of paper title and author list:
Not Even Nice Work If You Can Get It; A Longitudinal Study of Uber's Algorithmic Pay and Pricing
Reuben Binns, Jake Stein, Siddhartha Datta, Max Van Kleek, Nigel Shadbolt
Not Even Nice Work If You Can Get It; A Longitudinal Study of Uber's Algorithmic Pay and Pricing by @rdbinns.bsky.social @jmlstein.bsky.social et al. (incl. @emax.bsky.social) audits Uber's pay practices, focusing on the shift to paying drivers a "dynamic" (opaque, unpredictable) share of fare.
21.07.2025 15:47
Review Experience of Global Majority Scholars
We invite scholars who are either from the Global Majority or conduct research in the Global Majority to share their experiences of publishing in interdisciplinary venues such as CHI, CSCW, FAccT, Ubi...
Are you an HCI researcher from, or who studies, the Global Majority? Have you reviewed research about the Global Majority for HCI venues?
@farhana-shahid.bsky.social & I are conducting research on the peer review experiences of research by and about the Global Majority.
participation form: docs.google.com/forms/d/e/1F...
16.07.2025 16:48
of course, thank you for writing it!!
15.07.2025 22:09
If you saw me post then delete, it's because I accidentally linked to a different @narijohnson.bsky.social et al. paper from last year's FAccT about algorithmic abandonment (which is also excellent, and which I am linking below).
🔗: dl.acm.org/doi/10.1145/...
15.07.2025 16:35
As Government Outsources More IT, Highly Skilled In-House Technologists Are More Essential | Communications of the ACM
If you liked @narijohnson.bsky.social's paper and are interested in the importance of building in-house tech skills in government, check out this related ACM opinion piece by @isabelcorpus.bsky.social @giannella.bsky.social @allisonkoe.bsky.social @donmoyn.bsky.social!
🔗: dl.acm.org/doi/10.1145/...
15.07.2025 16:31
Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs | Proceedings of the 2025 ACM Conference on Fairness, Accountability,...
I love how this paper takes the time to really understand and explain what "legacy procurement practices" actually mean (and how they vary across jurisdictions!) and how it lays out a clear roadmap for the FAccT community to help address procurement issues!
🔗: dl.acm.org/doi/10.1145/...
15.07.2025 16:31
The authors found that AI is often *not* acquired via formal processes, meaning efforts to reform procurement are not applicable to most AI acquisitions. Further, many AI vendors do not cooperate with government efforts to mitigate AI harms, calling the impact of "purchasing power" into question!
15.07.2025 16:31
Screenshot of paper title and author list:
Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs
Nari Johnson, Elise Silva, Harrison Leon, Motahhare Eslami, Beth Schwanke, Ravit Dotan, Hoda Heidari
Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs by @narijohnson.bsky.social et al. explores procurement in the context of recent calls for governments to use their "purchasing power" to incentivize responsible AI.
15.07.2025 16:31
PhDing at Cornell
Disability, AI Fairness, and Human-AI Interaction
Research Lead, Civic & Democratic Participation Online at the Responsible Innovation Centre, BBC R&D
Background in computational social science, looking for PhD opportunities starting in 2026/27
Wrote my MSc dissertation on podcasts & politics
Graphics & Society @ MIT CSAIL.
Prev. Meta, Disney Research.
Mother of a half-dog, half-tornado and a baby seal.
she/her
Critical AI literacy. Runs WeandAI.org and BetterImagesofAI.org
TLA Tech for Disability and RSA RAIN, AI and Ethics Journal, 100 Brilliant Women in AI Ethics™ 2021, Computer Weekly Women in UK Tech Rising Star
Pssst the Broligarchs don't care about us
Researcher, technologist, artist. Public interest tech at Data & Society @datasociety.bsky.social & Slow Machines. she/her
Prev: Head of Open Source Research @mozilla.org Foundation, Tech Fellow @hrw.org, engineering & art @ITP-NYU.
nyc/slc
The Maybe is a media studio, collective, and consultancy challenging the power and politics of tech.
Home to the Computer Says Maybe podcast.
We are a researcher community developing scientifically grounded research outputs and robust deployment infrastructure for broader impact evaluations.
https://evalevalai.com/
Research Fellow @BKCHarvard. Previously @openai @ainowinstitute @nycedc. Views are yours, of my posts. #isagiwhatwewant
Associate Research Professor at Georgetown Better Gov Lab & Massive Data Institute: 1) Evidence-based implementation for safety net programs 2) Open source tools for data sharing in public services. Founded and led Data Science team at Code for America.
Harvard CS PhD Candidate. Interested in algorithmic decision-making, data-centric ML, and applications to public sector operations
A research center at Penn Engineering, working to foster research and innovation in interconnected social, economic and technological systems.
great value chidi anagonye. more seriously Societal Computing PhD student at Carnegie Mellon University. not just an ML account.
Tech and the critique of political economy // Political theory PhD Candidate at Johns Hopkins // Second Faculty at The Brooklyn Institute for Social Research // Researcher at IBM.
A field-defining intellectual hub for data science research, education, and outreach at the University of Chicago. Follow us on IG & Tik Tok: DSI_UChicago
AI, sociotechnical systems, social purpose. Research director at Google DeepMind. Cofounder and Chair at Deep Learning Indaba. FAccT2025 co-program chair. shakirm.com
Research & code: Research director @inria
▸ Data, Health, & Computer science
▸ Python coder, (co)founder of scikit-learn, joblib, & @probabl.bsky.social
▸ Sometimes does art photography
▸ Physics PhD
New here! PhD researcher on AI Alignment and Digital Democracy at ETH Zurich. Born in Australia, raised in Taiwan, based in Switzerland; at home in all. I look to history for what could be preserved, and digital democracy for what might be possible.
CS PhD Student at the University of Rochester
Trying to help privacy happen. Future media law prof @ Syracuse U
More on me here: www.alexisshoreingber.com
Studying people and computers (https://www.nickmvincent.com/)
Blogging about data and steering AI (https://dataleverage.substack.com/)