Theory of XAI Workshop, Dec 2, 2025
Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law ...
Workshop on the Theory of Explainable Machine Learning
Call for ≤2 page extended abstract submissions by October 15 now open!
ELLIS UnConference in Copenhagen
Dec. 2
More info: sites.google.com/view/theory-...
@gunnark.bsky.social @ulrikeluxburg.bsky.social @emmanuelesposito.bsky.social
30.09.2025 14:00 · 7 likes · 4 reposts · 0 replies · 0 quotes
I am hiring PhD students and/or postdocs to work on the theory of explainable machine learning. Please apply through ELLIS or IMPRS; deadlines are end of October and mid-November. In particular: Women, where are you? Our community needs you!!!
imprs.is.mpg.de/application
ellis.eu/news/ellis-p...
17.09.2025 06:17 · 23 likes · 15 reposts · 0 replies · 0 quotes
We need new rules for publishing AI-generated research. The teams developing automated AI scientists have customarily submitted their papers to standard refereed venues (journals and conferences) and to arXiv. Often, acceptance has been treated as the dependent variable. 1/
14.09.2025 17:15 · 73 likes · 23 reposts · 4 replies · 4 quotes
Center for the Alignment of AI Alignment Centers
We align the aligners
This new center strikes the right tone in approaching the AI alignment problem. alignmentalignment.ai
11.09.2025 20:47 · 58 likes · 14 reposts · 4 replies · 4 quotes
Pascual Restrepo - Research
Pascual Restrepo Official Website. Economist, MIT.
I don't know if it's a good place to start, but you might want to take a look at the work of Daron Acemoglu and Pascual Restrepo pascual.scripts.mit.edu/research/
03.09.2025 21:41 · 1 like · 0 reposts · 1 reply · 0 quotes
YouTube video by Friday Talks Tübingen
How much can we forget about Data Contamination? - [Sebastian Bordt]
A new recording of our FridayTalks@Tübingen series is online!
How much can we forget about Data Contamination?
by
@sbordt.bsky.social
Watch here: youtu.be/T9Y5-rngOLg
29.08.2025 07:05 · 2 likes · 1 repost · 1 reply · 0 quotes
I see the point of the original post, but I think it's also important to keep in mind this other aspect.
20.07.2025 21:13 · 1 like · 0 reposts · 1 reply · 0 quotes
www.inference.vc/we-may-be-su...
20.07.2025 20:54 · 7 likes · 1 repost · 1 reply · 0 quotes
The stochastic parrot is now an IMO gold medalist parrot
19.07.2025 20:50 · 57 likes · 7 reposts · 2 replies · 2 quotes
Wednesday: Position: Rethinking Explainable Machine Learning as Applied Statistics icml.cc/virtual/2025...
14.07.2025 14:49 · 1 like · 0 reposts · 0 replies · 0 quotes
I'm at #ICML in Vancouver this week, hit me up if you want to chat about pre-training experiments or explainable machine learning.
You can find me at these posters:
Tuesday: How Much Can We Forget about Data Contamination? icml.cc/virtual/2025...
14.07.2025 14:49 · 1 like · 1 repost · 1 reply · 0 quotes
Great to hear that you like it, and thank you for the feedback! I agree that stakeholders are important, although you are not going to find much about them in this paper. We might argue, though, that similar stakeholder considerations arise in data science with large datasets, hence the analogy :)
11.07.2025 22:39 · 0 likes · 0 reposts · 0 replies · 0 quotes
Our #ICML position paper: #XAI is similar to applied statistics: it uses summary statistics in an attempt to answer real-world questions. But authors need to state concretely (!) how their XAI statistics contribute to answering which concrete (!) question!
arxiv.org/abs/2402.02870
11.07.2025 07:35 · 6 likes · 2 reposts · 0 replies · 0 quotes
There are many more interesting aspects to this, so take a look at our paper!
arxiv.org/abs/2402.02870
We would also be happy to hear questions and comments on why we got it completely wrong.
If you are at ICML, I will present this paper on Wed 16 Jul at 4:30 in the East Exhibition Hall A-B, poster #E-501.
10.07.2025 18:02 · 0 likes · 0 reposts · 0 replies · 0 quotes
We think the literature on explainable machine learning can learn a lot from looking at these papers!
10.07.2025 18:02 · 1 like · 0 reposts · 1 reply · 0 quotes
As I learned from our helpful ICML reviewers, there is a lot of existing research at the intersection of machine learning and statistics that takes the matter of interpretation quite seriously.
10.07.2025 18:02 · 0 likes · 0 reposts · 1 reply · 0 quotes
In this framework, another way to formulate the initial problem is: for many popular explanation algorithms, it is not clear whether they have an interpretation.
10.07.2025 18:01 · 0 likes · 0 reposts · 1 reply · 0 quotes
Having an interpretation means that the explanation formalizes an intuitive human concept, which is a fancy philosophical way of saying that it is clear what aspect of the function the explanation describes.
10.07.2025 18:01 · 0 likes · 0 reposts · 1 reply · 0 quotes
In addition, the way to develop explanations that are useful "in the world" is to develop explanations that have an interpretation.
10.07.2025 18:01 · 0 likes · 0 reposts · 1 reply · 0 quotes
This has several important implications. Most importantly, explainable machine learning has often been trying to reinvent the wheel when we already have a robust framework for discussing complex objects in the light of pressing real-world questions.
10.07.2025 18:00 · 1 like · 0 reposts · 1 reply · 0 quotes
It took us a while to recognize it, but once you see it, you can't unsee it: Explainable machine learning is applied statistics for learned functions. ✨
10.07.2025 18:00 · 0 likes · 0 reposts · 1 reply · 0 quotes
Concretely, researchers in applied statistics study complex datasets by mapping their most important properties into low-dimensional structures. Now think:
Machine learning model ~ Large dataset
Explanation algorithm ~ Summary statistics, visualization
10.07.2025 18:00 · 2 likes · 0 reposts · 1 reply · 0 quotes
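(A minimal code sketch of the analogy above, not from the thread or the paper; it assumes scikit-learn, uses a synthetic dataset, and lets permutation importance stand in for a generic explanation algorithm.)

```python
# Illustrative sketch: an explanation algorithm as a "summary statistic"
# of a learned function, mirroring summary statistics of a dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=500, n_features=5, random_state=0)

# Summary statistics of a *dataset*: map its most important properties
# into low-dimensional structures (means, correlations, ...).
print(X.mean(axis=0))

# A "summary statistic" of a *learned function*: permutation importance
# maps the model's behavior into one number per feature.
model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```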
Here comes our key realization: This question has occurred in other disciplines before, specifically in applied statistics research.
10.07.2025 18:00 · 1 like · 0 reposts · 1 reply · 0 quotes
So, how can we seriously discuss whether an explanation algorithm can be used to answer relevant questions about our trained model or the world?
10.07.2025 17:59 · 0 likes · 0 reposts · 1 reply · 0 quotes
I have actually encountered this point in my own research before, where we did a detailed mathematical analysis of SHAP, but all the math could not reveal the right way to use the explanations in practice (arxiv.org/abs/2209.040...)
10.07.2025 17:59 · 0 likes · 0 reposts · 1 reply · 0 quotes
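(A hypothetical illustration of this point, assuming the shap library; this is not code from the cited paper. Computing the values is mechanical; the math alone does not say which practical question they answer.)

```python
# Illustrative sketch: SHAP values are easy to compute, but their
# mathematical properties do not by themselves yield an interpretation.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # standard shap API for tree models
shap_values = explainer.shap_values(X)  # one attribution per sample and feature

# The theory guarantees properties such as additivity (attributions sum to
# prediction minus baseline), but not which concrete question about the
# model or the world these numbers answer.
print(shap_values.shape)  # (200, 5)
```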
And then they never make clear how the technical properties of the algorithm tie back to the grander goals of explainability research (meaning: how do we actually use the algorithm in practice?)
10.07.2025 17:59 · 1 like · 0 reposts · 1 reply · 0 quotes
The problem: Papers about explanation algorithms often start with grand promises, but then quickly dive into technical details.
10.07.2025 17:59 · 0 likes · 0 reposts · 1 reply · 0 quotes
During the last couple of years, we have read a lot of papers on explainability and often felt that something was fundamentally missing 🤔
This led us to write a position paper (accepted at #ICML2025) that attempts to identify the problem and to propose a solution.
arxiv.org/abs/2402.02870
👇🧵
10.07.2025 17:58 · 12 likes · 5 reposts · 1 reply · 1 quote
joint work with Suraj Srinivas, Valentyn Boreiko, and @ulrikeluxburg.bsky.social
Link to paper: arxiv.org/pdf/2410.03249
Link to code: github.com/tml-tuebinge...
We would be happy to hear feedback or questions about this work!
08.07.2025 06:46 · 0 likes · 0 reposts · 0 replies · 0 quotes
Prof at EPFL
AI • Climbing
AI technical gov & risk management research. PhD student @MIT_CSAIL, fmr. UK AISI. I'm on the CS faculty job market! https://stephencasper.com/
AI x neuroscience.
www.rdgao.com
Research group leader @ Max Planck Institute working on theory & social aspects of CS. Previously @UCSC @GoogleDeepMind @Stanford @PKU1898
https://yatongchen.github.io/
Postdoc @ai2.bsky.social & @uwnlp.bsky.social
Faculty at the ELLIS Institute Tübingen and the Max Planck Institute for Intelligent Systems. Leading the AI Safety and Alignment group. PhD from EPFL supported by Google & OpenPhil PhD fellowships.
More details: https://www.andriushchenko.me/
PhD Student in Machine Learning @unituebingen.bsky.social, @ml4science.bsky.social, @tuebingen-ai.bsky.social, IMPRS-IS; previously intern @vectorinstitute.ai; jzenn.github.io
PhD student at the Max Planck Institute for Intelligent Systems
Safe and robust AI, algorithms and society
https://andrefcruz.github.io
researcher in 🇩🇪, from 🇵🇹
Incoming faculty at the Max Planck Institute for Software Systems
Postdoc at UW, working on Natural Language Processing
Recruiting PhD students!
๐ https://lasharavichander.github.io/
Researcher in machine learning
Trinity College Dublin's Artificial Intelligence Accountability Lab (https://aial.ie/) is founded & led by Dr Abeba Birhane. The lab studies AI technologies & their downstream societal impact with the aim of fostering a greater ecology of AI accountability.
Biomedical Computer Vision & Language Models, scads.ai / Leipzig University, also NFDI4BIOIMAGE, NEUBIAS/GloBIAS, GPUs, AI, ML. views: mine
ML, λ • language and the machines that understand it • https://ocramz.github.io
Assistant Professor @westernu.ca, Faculty Affiliate @vectorinstitute.ai. Probabilistic machine learning, decision-making, AI4Science. Bayesian + frequentist, etc!
AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social (opinions my own). Author of Rising Tide on Substack: helentoner.substack.com
PhD @Stanford working w Noah Goodman
Studying in-context learning and reasoning in humans and machines
Prev. @UofT CS & Psych
Foundations of AI. I like simple and minimal examples and creative ideas. I also like thinking about the next token 🧮🧸
Google Research | PhD, CMU |
https://arxiv.org/abs/2504.15266 | https://arxiv.org/abs/2403.06963
vaishnavh.github.io
A latent space odyssey
gracekind.net
PhD Student in Tübingen (MPI-IS & Uni Tü), interested in reinforcement learning. Freedom is a pure idea. https://onnoeberhard.com/