Screenshot of the first page of a paper preprint titled "Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor" by Olteanu et al. Paper abstract: "In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about AI capabilities. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception -- in addition to a more expansive understanding of (1) methodological rigor -- should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also aim to provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders."
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor lead not only to one-off undesirable outcomes but can have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
18.06.2025 11:48
At the #HEAL workshop, I'll present "Systematizing During Measurement Enables Broader Stakeholder Participation" on ways we can further structure LLM evaluations and open them up to deliberation. A project led by @hannawallach.bsky.social
25.04.2025 22:57
These results can serve to refine current AI regulations that touch upon "trust" **within the AI supply chain** and the "trustworthiness" of the resulting AI systems.
agathe-balayn.github.io/assets/pdf/b...
25.04.2025 22:57
At the main conference, I'll present our work "Unpacking Trust Dynamics in the LLM Supply Chain: An Empirical Exploration to Foster Trustworthy LLM Production And Use" (honorable mention) on how trust relations in the LLM supply chain affect the resulting AI system.
25.04.2025 22:57
At the #STAIG workshop, I'll discuss our empirical study of *pig farming* supply chains. 🐷
We show how inconspicuous software engineering practices might transform farming environments negatively, and how the harm-based approach to AI regulation might fail to attend to these transformations.
25.04.2025 22:57
I will be at #CHI25 in person this week 🇯🇵
I'm looking forward to chatting about **AI supply chains** through socio-technical & organizational / regulatory & governance / political economic lenses.
I'll present my work at the main conference (honorable mention), and attend the #HEAL and #STAIG workshops.
25.04.2025 22:57
PhD student in NLP+AI Ethics @ ILLC, University of Amsterdam
prev maths at Imperial College London, TUM
#NLP #NLProc
MIT media lab // researching fairness, equity, & pluralistic alignment in LLMs
previously @ mila / mcgill
i like language and dogs and plants and ultimate frisbee and baking and sunsets
https://elinorp-d.github.io
Info sci prof @ Drexel, trying to keep the machines (esp. RecSys & IR) from learning bigotry and discrimination. ADHDS9. Usually self-propelled. Opinions those of the Vulcan Science Academy. π°x2.
https://md.ekstrandom.net
🧪 https://inertial.science
Tech + discrimination researcher @AmnestyNL.
Likes/shares not endorsements; views my own
Here to find and tell tales relating to all aspects of technology enhanced learning; educational technology; #edtech; #highered; learning design; digital education; future, current and past. Working at the University of Galway. Mo thuairimí féin (my own opinions).
Responsible AI | AI Red Teaming at Microsoft
EdPsych Prof at UW-Madison
equitable human-centered AI for teaching and learning
himalayas, meditation, cat mom (she/her)
natural language processing, statistics, and computational medicine
PhD Candidate @Utrecht University - Methods & Statistics & @University Medical Center Utrecht - AI Methods
https://danadria.com
Forschung für die vernetzte Gesellschaft \\ Research for the networked society
https://www.weizenbaum-institut.de/
QT doing QC (Questioning Clouds not Quantum Computing)
PhD Student in AI for Society at University of Pisa
Responsible NLP; XAI; Fairness; Abusive Language
Member of Privacy Network
she, her
martamarchiori.github.io
aspirational service top. scorpio sun, taurus rising, virgo moon.
HCI PhD @ Northwestern. I design human-centered AI tools to support science communication + study how AI tools might reshape people's values, practices, relationships. she/her.
nishalsach.github.io
🇿🇦 | Fairness in AI | University of Oxford | Deep Learning Indaba | Internships: Google DeepMind || Microsoft Research
💼 IxD Professor, Aalto University
🧪 UX, HCI, spirituality researcher
🎙️ ACM distinguished speaker
🛠️ Ex-SDU, -Nokia, -Philips
🎓 Alum TUe, UTEM
📣 Views my own
Research Engineer at New York University. Interested in dataset search & discovery, sketching, data management, nlp, and information retrieval.
PhD student at the University of Washington in social computing + human-AI interaction @socialfutureslab.bsky.social. 🌐 kjfeng.me
Professor of AI, society, media and democracy + Political Communication, U Amsterdam || Director of Digital Democracy Centre, SDU || AlgoSoc || AIMD || Mix of work and private stuff ||