Happy to be at the #XAI World Conference 2025 in Istanbul. I will be presenting our work "EvalxNLP: A Framework for Benchmarking Post-Hoc Explainability Methods on NLP Models" in the poster sessions tomorrow, so feel free to pass by to talk more about the usability of explainability methods!
09.07.2025 21:32
In this paper, we show that widely used post-hoc feature attribution methods exhibit significant gender disparity with respect to their faithfulness, robustness, and complexity.
This work was done with Ege Erdogan, @nfel.bsky.social, and Gjergji Kasneci.
26.06.2025 05:25
Very happy to be at #FAccT2025 in Athens, where I presented our work "Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods"
Paper: dl.acm.org/doi/10.1145/...
At #FAccT2025? Let's connect if you're interested in improving the usability of explainability methods!
26.06.2025 05:25
Widening NLP (WiNLP) aims to elevate underrepresented voices in #NLProc. We care about #diversity and #inclusion. #EMNLP2025
Professor for AI at Hasso Plattner Institute and University of Potsdam
Berlin (prev. Rutgers NJ USA, Tsinghua Beijing, Berkeley)
http://gerard.demelo.org
#NLProc PhD student in #Edinburgh 🏴󠁧󠁢󠁳󠁣󠁴󠁿 Incoming postdoc at #Mila 🇨🇦 interpretability x memorisation x (non-)compositionality. she/her 👩‍💻 🇳🇱
The 2025 Conference on Language Modeling will take place at the Palais des Congrès in Montreal, Canada from October 7-10, 2025
Assistant Prof of CS at the University of Waterloo, Faculty and Canada CIFAR AI Chair at the Vector Institute. Joining NYU Courant in September 2026. Co-EiC of TMLR. My group is The Salon. Privacy, robustness, machine learning.
http://www.gautamkamath.com
Machine Learning and Security, Professor of Computer Science at TU Berlin
Software researcher at https://cispa.de, working on #Fandango, #S3, #FuzzingBook, #DebuggingBook. Testing, debugging, analyzing, and protecting software for a better world. Find me at https://andreas-zeller.info/
The CISPA Helmholtz Center for Information Security is a German national Big Science Institution within the Helmholtz Association. We research information security in all its facets.
Germany's largest scientific organization
Website: https://www.helmholtz.de
Mastodon: https://helmholtz.social/@helmholtz
Legal notice: https://www.helmholtz.de/socialmedia
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, and the SERI program | Focusing on interpretable, safe, and ethical AI decision-making.
Computer Science -- Robotics (cs.RO)
source: export.arxiv.org/rss/cs.RO
maintainer: @tmaehara.bsky.social
ML/AI Robustness in Health @MIT
Postdoc in Gryn'ova group @ UoB | UMN Chem PhD 2024 | AI and machine learning in chemistry
PhD Student in AI for Society at University of Pisa
Responsible NLP; XAI; Fairness; Abusive Language
Member of Privacy Network
she, her
martamarchiori.github.io
information science professor (tech ethics + internet stuff)
kind of a content creator (elsewhere also @professorcasey)
though not influencing anyone to do anything except maybe learn things
she/her
more: casey.prof
AI professor. Director, Foundations of Cooperative AI Lab at Carnegie Mellon. Head of Technical AI Engagement, Institute for Ethics in AI (Oxford). Author, "Moral AI - And How We Get There."
https://www.cs.cmu.edu/~conitzer/
Human-centered research, trustworthy data analytics in safety-critical applications, explainable ML, privacy-aware algorithms.
More about us: https://rc-trust.ai/
🛠️ Actionable Interpretability @icmlconf.bsky.social 2025 | Bridging the gap between insights and actions ✨ https://actionable-interpretability.github.io
Nordic AI Research, Education, and Innovation Partnership
CADIA • NORA • WASP • P1 • FCAI
http://nordicpartnership.ai
The Thirty-Eighth Annual Conference on Neural Information Processing Systems will be held at the Vancouver Convention Center from Tuesday, Dec 10 through Sunday, Dec 15.
https://neurips.cc/