🚨 Deadline Extended to Feb 5 (AoE)!
CFP still OPEN for the #AFAA2026 Workshop at @iclr-conf.bsky.social, on fairness across alignment & agentic AI systems.
Full & tiny papers welcome • Interdisciplinary work encouraged!
afciworkshop.org
#ICLR2026 #AFAA2026
02.02.2026 17:48
AFAA 2026
The Algorithmic Fairness Across Alignment Procedures and Agentic Systems (AFAA) workshop aims to spark discussions on rethinking fairness in AI alignment procedures and agentic system development.
🚨 CFP OPEN! We're launching the #AFAA2026 Workshop at @iclr-conf.bsky.social on fairness across alignment and agentic AI systems.
Submit your latest ideas (full or tiny papers!)
Interdisciplinary work especially welcome :D
Deadline: Jan 31 (AoE) | www.afciworkshop.org
#AFAA2026 #ICLR2026
06.01.2026 02:39
Four case studies on the gap between how models are actually used and how they are evaluated in sandboxed audits... Definitely need to take a deeper dive. Great presentation by Emily Black!
25.06.2025 08:52
Evaluating models the way they would actually be deployed vs. evaluating them only in controlled, unrealistic settings!
25.06.2025 08:52
Allowing companies to run isolated audits can lead to D-hacking!! More robust testing is needed...
25.06.2025 08:52
Legal frameworks tend to govern allocative decisions (yes/no outcomes), which fit well with traditional ML systems... but not with GenAI systems
25.06.2025 08:52
Zollo et al: Towards Effective Discrimination Testing for Generative AI
#FAccT2025
25.06.2025 08:43
The nuance of stereotype errors is so important for understanding their true harms... Insightful presentation by @angelinawang.bsky.social
25.06.2025 08:43
Women tend to report stereotype-reinforcing errors as more harmful while men tend to report stereotype-violating errors as more harmful...
25.06.2025 08:43
Some items are more associated with men than with women (not surprising), but not all of these errors are equally harmful!!
25.06.2025 08:43
Cognitive beliefs, attitudes and behaviours... Three ways to measure harms ('pragmatic harms')
25.06.2025 08:43
Are all errors equally harmful? No! Stereotype-reinforcing errors vs stereotype-violating errors
25.06.2025 08:43
Our understanding of stereotypes sometimes isn't indicative of reality... they can appear in both directions, or might simply exist without harm
25.06.2025 08:43
Wang et al: Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways
#FAccT2025
25.06.2025 08:34
Clear narrative and a great presentation by Cecilia Panigutti
25.06.2025 08:33
Risk-measuring studies - Bringing it back to risk measurement, but this time with a clearly defined objective rather than risk-uncovering as before... Not just whether a risk exists, but how severe it is!
25.06.2025 08:33
Interface-design studies - Focus on UI design elements that shape user interaction
25.06.2025 08:33
Reverse-engineering studies - Narrower-scope, in-depth studies of how algorithms work... Methodological precision is the key!
25.06.2025 08:33
Risk-uncovering studies - These typically start from anecdotal evidence and help surface new risks
25.06.2025 08:33
A review organized not by data collection technique, but by DSA risk management framework categories
25.06.2025 08:33
Narrative review of algorithmic auditing studies, practical recommendations for best practices, and a mapping to DSA obligations...
25.06.2025 08:33
Panigutti et al: How to investigate algorithmic-driven risks in online platforms and search engines? A narrative review through the lens of the EU Digital Services Act
#FAccT2025
25.06.2025 08:22
Such a broad topic... Excellent presentation by @feliciajing.bsky.social
25.06.2025 08:22
Historical methods, working alongside many other ways of auditing these models, can help us take advantage of the broader scope of historical evaluations...
25.06.2025 08:22
AI audits have moved from bottom-up external evaluations to new-age 'auditing companies'. While this has increased speed and scale, it has significantly narrowed the scope of auditing.
25.06.2025 08:22
Why the history of AI assessments? A study through the lens of historical methods can help us understand neglected areas of auditing.
25.06.2025 08:22
Sandoval and Jing: Historical Methods for AI Evaluations, Assessments, and Audits
#FAccT2025
25.06.2025 08:10
Important recommendations on standardizing how reports are created and stored, to enable better meta-analyses in the future... Eye-opening presentation by @mkgerchick.bsky.social
25.06.2025 08:10
Applicants impacted by these tools, whose demographic data is missing, are completely removed from these audits!
25.06.2025 08:10
Serious issues with the data usage... weirdest to me: 'simulated test data'!
25.06.2025 08:10