
Emma Harvey

@emmharv.bsky.social

PhD student @ Cornell info sci | Sociotechnical fairness & algorithm auditing | Previously MSR FATE, Penn | https://emmaharv.github.io/

978 Followers  |  361 Following  |  77 Posts  |  Joined: 30.07.2023

Latest posts by emmharv.bsky.social on Bluesky

This is such a great paper and really helps to emphasize how data underspecification in ML systems biases our understanding and decision-making, especially in inequitable, resource-scarce settings. Thanks for sharing @emmharv.bsky.social!

27.07.2025 23:38 | 👍 2    🔁 1    💬 0    📌 0

Emma has such good research taste :)

Given the sheer scale of these events, it's really helpful to see what caught people's eye at these conferences...

26.07.2025 02:52 | 👍 9    🔁 2    💬 0    📌 0
Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency

What's super cool to me about this paper is that it does a longitudinal analysis - so many audit studies stick to a single point in time, and this paper is a great demonstration that the data available to you at that time will inevitably impact your measurements.

🔗: dl.acm.org/doi/10.1145/...

24.07.2025 19:52 | 👍 2    🔁 0    💬 0    📌 0

The authors find that delays in reporting patient demographics are the norm (for >50% of patients, race is reported >60 days after other details like DOB). These delays obfuscate measurement of health outcomes and health disparities, and techniques like imputing race do not improve measurement.

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0
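A minimal sketch of the mechanism described above, on fully synthetic data with hypothetical delay and disparity parameters (not the paper's data or methods): if race is reported late, and later still for records with adverse outcomes, a disparity assessment run at a fixed cutoff date understates the true gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population: two groups with a true gap in adverse-outcome rates (hypothetical numbers).
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
outcome = rng.random(n) < np.where(group == "B", 0.12, 0.08)  # true gap: 4 percentage points

# Hypothetical reporting delays (in days) for the race field; records with adverse
# outcomes are assumed to be reported later, so late records are not missing at random.
delay_days = rng.exponential(scale=np.where(outcome, 90, 30))

true_gap = outcome[group == "B"].mean() - outcome[group == "A"].mean()

# A disparity assessment at day 60 can only use records whose race is already reported.
reported = delay_days <= 60
measured_gap = (outcome[reported & (group == "B")].mean()
                - outcome[reported & (group == "A")].mean())

print(f"true gap: {true_gap:.3f}   gap measured at day 60: {measured_gap:.3f}")
```

Whether a delay attenuates or inflates the measured disparity depends on how it correlates with group and outcome; the point is simply that the assessment date changes the answer.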
Screenshot of paper title and author list: 

Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments 
Jennah Gosciak, Aparna Balagopalan, Derek Ouyang, Allison Koenecke, Marzyeh Ghassemi, Daniel E. Ho


⏳ Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments by @jennahgosciak.bsky.social and @aparnabee.bsky.social et al. (incl. @allisonkoe.bsky.social @marzyehghassemi.bsky.social) analyzes how missing demographic data impacts estimates of health disparities.

24.07.2025 19:52 | 👍 3    🔁 0    💬 1    📌 0
Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems

I love how this paper emphasizes that evaluation != accountability and recommends steps towards accountability: ensuring that tools are open, valid, and reliable; focusing on tools to support participatory methods; and ensuring auditors are protected from retaliation.

🔗: dl.acm.org/doi/full/10....

24.07.2025 19:52 | 👍 2    🔁 0    💬 2    📌 0

The authors analyze 435(!) tools, finding that most focus on evaluation – but tools for other aspects of AI audits, like harms discovery, communicating audit results, and advocating for change, are less common. Further, while many tools are freely available, auditors often struggle to use them.

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0
Screenshot of paper title and author list: 

Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling
Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, Inioluwa Deborah Raji


Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling by @victorojewale.bsky.social @rbsteed.com @briana-v.bsky.social @abeba.bsky.social @rajiinio.bsky.social compares the landscape of AI audit tools (tools.auditing-ai.com) to the actual needs of AI auditors.

24.07.2025 19:52 | 👍 21    🔁 2    💬 1    📌 0
External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency

This work won a 🏆Best Paper Award🏆 at FAccT! I think it's a fantastic example of an external audit that not only identifies a problem but also provides concrete steps towards a solution.

🔗: dl.acm.org/doi/10.1145/...

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0

The authors show that the external review established by the settlement is insufficient to guarantee that Meta is actually reducing discrimination in ad delivery (as opposed to adversarially complying by showing the same ad repeatedly to one person or applying VRS only to small ad campaigns).

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0
Screenshot of paper title and author list: 

External Evaluation of Discrimination Mitigation Efforts in Meta’s Ad Delivery 
Basileal Imana, Zeyu Shen, John Heidemann, Aleksandra Korolova


📱 External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery by Imana et al. audits VRS (Meta’s process for reducing bias in ad delivery as part of a settlement with DOJ), and finds VRS reduces demographic differences in ad audience – but also reduces reach and increases cost.

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0

I thought this paper was really interesting, and I particularly appreciated the authors' point that models can make decisions that are "consistent" but still "arbitrary" if model selection is not done in a principled way!

🔗: dl.acm.org/doi/10.1145/...

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0

The authors propose that opportunity pluralism is most important in domains that involve normative or high-uncertainty decisions, or where decision subjects can choose among multiple decision-makers. Even in those domains, the authors argue that individual models should still be consistent.

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0
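A toy illustration of that tension, on synthetic data (not the authors' framework or results): two models with essentially the same accuracy can still hand different decisions to many of the same individuals, so an unprincipled choice between them is arbitrary even though whichever model is deployed behaves consistently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 5_000, 20
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two "equally good" models: same model family, trained on different random halves of the data.
half = len(X_tr) // 2
idx1 = rng.choice(len(X_tr), size=half, replace=False)
idx2 = rng.choice(len(X_tr), size=half, replace=False)
m1 = LogisticRegression(max_iter=1000).fit(X_tr[idx1], y_tr[idx1])
m2 = LogisticRegression(max_iter=1000).fit(X_tr[idx2], y_tr[idx2])

disagree = (m1.predict(X_te) != m2.predict(X_te)).mean()
print(f"accuracy: {m1.score(X_te, y_te):.3f} vs {m2.score(X_te, y_te):.3f}; "
      f"test individuals who would receive different decisions: {disagree:.1%}")
```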
Screenshot of paper title and author list: 

Consistently Arbitrary or Arbitrarily Consistent: Navigating the Tensions Between Homogenization and Multiplicity in Algorithmic Decision-Making
Shira Gur-Arieh, Christina Lee


🎲 Consistently Arbitrary or Arbitrarily Consistent: Navigating the Tensions Between Homogenization and Multiplicity in Algorithmic Decision-Making by Gur-Arieh and Lee explores the competing desires for consistency in decision-making models and opportunity pluralism in decision-making ecosystems.

24.07.2025 19:52 | 👍 2    🔁 0    💬 1    📌 0

Check out our work at @ic2s2.bsky.social this afternoon during the Communication & Cooperation II session!

23.07.2025 10:01 | 👍 8    🔁 1    💬 0    📌 0
Annual Meeting of the Association for Computational Linguistics (2025) - ACL Anthology: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Wanxiang Che | Joyce Nabende | Ekaterina Shutova | Mohammad Taher Pilehvar

🥳 🎉 ❤️ The ACL 2025 Proceedings are live on the ACL Anthology 🥰!
We’re thrilled to pre-celebrate the incredible research 📚 ✨ that will be presented starting Monday next week in Vienna 🇦🇹!
Start exploring 👉 aclanthology.org/events/acl-2...
#NLProc #ACL2025NLP #ACLAnthology

22.07.2025 20:00 | 👍 57    🔁 19    💬 0    📌 1

I broke the thread 😅

21.07.2025 15:49 | 👍 2    🔁 0    💬 1    📌 0
Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency

My favorite part of this paper was the point that "synthetic data creates distance between individuals and the data...derived from [them]." Synthetic data is often considered privacy-preserving, but it can actually reduce opportunities for participation and redress!

🔗: dl.acm.org/doi/10.1145/...

21.07.2025 15:47 | 👍 1    🔁 0    💬 0    📌 0

The authors find that, while synthetic data has benefits (e.g., preventing humans from annotating harmful content), it can also flatten identity and reinforce stereotypes. Its quality is challenging to validate, esp. if the same auxiliary models are used to produce training and evaluation data.

21.07.2025 15:47 | 👍 0    🔁 0    💬 1    📌 0
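A small sketch of the circularity in that last point, on an entirely synthetic setup (not the paper's study design): when the same auxiliary model produces both the training labels and the evaluation labels, scores against those labels overstate performance on the real task.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def features(n):
    return rng.normal(size=(n, 5))

def true_label(x):
    # The "real" task has a nonlinear ground truth that the auxiliary model cannot capture.
    return ((x[:, 0] * x[:, 1] + x[:, 2]) > 0).astype(int)

# Auxiliary model: a weak annotator fit on a small human-labeled seed set, then used to
# label BOTH the synthetic training data and the evaluation data.
x_seed = features(300)
aux = LogisticRegression(max_iter=1000).fit(x_seed, true_label(x_seed))

x_train, x_eval = features(5_000), features(2_000)
model = DecisionTreeClassifier(max_depth=6, random_state=0).fit(x_train, aux.predict(x_train))

print("accuracy against auxiliary-model labels:", round(model.score(x_eval, aux.predict(x_eval)), 3))
print("accuracy against ground-truth labels:   ", round(model.score(x_eval, true_label(x_eval)), 3))
```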
A screenshot of the paper title and author list: 

Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline 
Shivani Kapania, Stephanie Ballard, Alex Kessler,  Jennifer Wortman Vaughan


🔑 Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline by Kapania et al. (incl. @jennwv.bsky.social) asks: what are practitioners' motivations, current practices, desiderata, and challenges when generating, using, and validating synthetic data to develop AI?

21.07.2025 15:47 | 👍 0    🔁 0    💬 1    📌 0
Not Even Nice Work If You Can Get It; A Longitudinal Study of Uber's Algorithmic Pay and Pricing | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency

I love how this paper (in partnership with @workerinfox.bsky.social) engaged with drivers to center the questions that drivers actually had about their pay! I am also very intrigued by the prospect of using DSARs to conduct large-scale algorithm audits 👀

🔗: dl.acm.org/doi/10.1145/...

21.07.2025 15:47 | 👍 1    🔁 0    💬 1    📌 0

Uber does not share detailed pay information with drivers, so the authors relied on Data Subject Access Requests (which GDPR requires Uber to fulfill). They find that, under dynamic pricing, pay-per-hour fell, standby time increased, and the share of fare that drivers received varied ride-to-ride.

21.07.2025 15:47 | 👍 1    🔁 0    💬 1    📌 0
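For anyone curious what that analysis looks like mechanically, here is a minimal sketch of computing the driver's share of fare and pay per engaged hour from per-trip records; the column names and numbers are hypothetical, not Uber's actual DSAR schema, and measuring standby time would additionally require shift-level session data.

```python
import pandas as pd

# Hypothetical per-trip records of the kind a driver might assemble from a DSAR export.
trips = pd.DataFrame({
    "start":      pd.to_datetime(["2024-05-01 09:00", "2024-05-01 09:40", "2024-05-01 10:30"]),
    "end":        pd.to_datetime(["2024-05-01 09:25", "2024-05-01 10:05", "2024-05-01 11:05"]),
    "rider_fare": [14.50, 22.00, 9.75],   # what the rider paid
    "driver_pay": [9.10, 12.30, 7.40],    # what the driver received
})

# Ride-to-ride variation in the driver's share of the fare (the "take rate").
trips["share_of_fare"] = trips["driver_pay"] / trips["rider_fare"]
engaged_hours = (trips["end"] - trips["start"]).dt.total_seconds().sum() / 3600

print(trips["share_of_fare"].round(2).tolist())
print(f"pay per engaged hour: {trips['driver_pay'].sum() / engaged_hours:.2f}")
```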
Screenshot of paper title and author list:

Not Even Nice Work If You Can Get It; A Longitudinal Study of Uber's Algorithmic Pay and Pricing
Reuben Binns, Jake Stein, Siddhartha Datta, Max Van Kleek, Nigel Shadbolt


🚗 Not Even Nice Work If You Can Get It; A Longitudinal Study of Uber's Algorithmic Pay and Pricing by @rdbinns.bsky.social @jmlstein.bsky.social et al. (incl. @emax.bsky.social) audits Uber's pay practices, focusing on the shift to paying drivers a "dynamic" (opaque, unpredictable) share of fare.

21.07.2025 15:47 | 👍 6    🔁 0    💬 1    📌 1
Review Experience of Global Majority Scholars: We invite scholars who are either from the Global Majority or conduct research in the Global Majority to share their experiences of publishing in interdisciplinary venues such as CHI, CSCW, FAccT, Ubi...

Are you an HCI researcher from, or who studies, the Global Majority? Have you reviewed research about the Global Majority for HCI venues?

@farhana-shahid.bsky.social & I are conducting research on the peer review experiences of research by and about the Global Majority.
Participation form: docs.google.com/forms/d/e/1F...

16.07.2025 16:48 | 👍 5    🔁 6    💬 0    📌 0

of course, thank you for writing it!!

15.07.2025 22:09 | 👍 1    🔁 0    💬 0    📌 0

If you saw me post then delete, it's because I accidentally linked to a different @narijohnson.bsky.social et al. paper from last year's FAccT about algorithmic abandonment (which is also excellent, and which I am linking below) 😅 😅

🔗: dl.acm.org/doi/10.1145/...

15.07.2025 16:35 | 👍 1    🔁 0    💬 0    📌 0
As Government Outsources More IT, Highly Skilled In-House Technologists Are More Essential | Communications of the ACM

If you liked @narijohnson.bsky.social's paper and are interested in the importance of building in-house tech skills in government, check out this related ACM opinion piece by @isabelcorpus.bsky.social @giannella.bsky.social @allisonkoe.bsky.social @donmoyn.bsky.social!

🔗: dl.acm.org/doi/10.1145/...

15.07.2025 16:31 | 👍 9    🔁 1    💬 1    📌 0
Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs | Proceedings of the 2025 ACM Conference on Fairness, Accountability,...

I love how this paper takes the time to really understand and explain what "legacy procurement practices" actually mean (and how they vary across jurisdictions!) and how it lays out a clear roadmap for the FAccT community to help address procurement issues!

🔗: dl.acm.org/doi/10.1145/...

15.07.2025 16:31 | 👍 8    🔁 0    💬 2    📌 0

The authors found that AI is often *not* acquired via formal processes, meaning efforts to reform procurement are not applicable to most AI acquisitions. Further, many AI vendors do not cooperate with government efforts to mitigate AI harms – calling the impact of "purchasing power" into question!

15.07.2025 16:31 | 👍 4    🔁 0    💬 1    📌 0
Screenshot of paper title and author list: 

Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees’ Practices, Challenges, and Needs
Nari Johnson, Elise Silva, Harrison Leon, Motahhare Eslami, Beth Schwanke, Ravit Dotan, Hoda Heidari


🏦 Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees’ Practices, Challenges, and Needs by @narijohnson.bsky.social et al. explores procurement in the context of recent calls for governments to use their "purchasing power" to incentivize responsible AI.

15.07.2025 16:31 | 👍 8    🔁 1    💬 2    📌 0
