Not that I know of. But the method is relatively easy to implement. Please reach out if you would like to use it. I'm happy to assist!
08.07.2025 15:00 — 👍 1 🔁 0 💬 1 📌 0
Sounds interesting? Have a look at our paper!
Joint work with Eric Günther and @ulrikeluxburg.bsky.social.
DIP
✅ is unique under mild assumptions,
✅ is easy to interpret,
✅ entails an efficient estimation procedure,
✅ describes properties of the data (instead of just a specific model), and
✅ comes with a Python implementation (github.com/gcskoenig/dipd); see the toy sketch below.
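To build intuition for what such a decomposition separates, here is a minimal toy sketch for two features. It is neither the paper's estimator nor the dipd API (see the repo for those); the data-generating process, the OLS models, and the crude additive-vs-full split are illustrative assumptions only.

```python
# Toy sketch of the kind of two-feature decomposition DIP performs.
# NOT the paper's estimator and NOT the dipd API (see github.com/gcskoenig/dipd
# for the real implementation); data and model choices are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000

x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)          # x2 depends on x1
y = x1 + x2 + 0.5 * x1 * x2 + rng.normal(size=n)  # main effects + interaction

def r2(*features):
    """R^2 of an OLS fit of y on the given feature columns."""
    X = np.column_stack(features)
    return LinearRegression().fit(X, y).score(X, y)

v1 = r2(x1)                     # standalone contribution of x1
v2 = r2(x2)                     # standalone contribution of x2
v12_add = r2(x1, x2)            # additive model, no interaction term
v12_full = r2(x1, x2, x1 * x2)  # full model with the interaction

surplus = v12_full - v1 - v2           # total cooperative effect
interaction_part = v12_full - v12_add  # crude interaction share
dependence_part = surplus - interaction_part

# Here the surplus comes out negative: the dependence makes x1 and x2 partly
# redundant, masking the positive interaction -- exactly the cancellation
# the thread describes below.
print(f"v(1)={v1:.2f}, v(2)={v2:.2f}, v(1,2)={v12_full:.2f}")
print(f"surplus={surplus:.2f} = interaction {interaction_part:.2f} "
      f"+ dependence {dependence_part:.2f}")
```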
In our recent AISTATS paper, we propose DIP, a novel mathematical decomposition of feature attribution scores that cleanly separates individual feature contributions from the contributions of interactions and dependencies.
07.07.2025 15:40 — 👍 0 🔁 0 💬 1 📌 0
Dependencies are not only a neglected cooperative force; they also complicate the definition and quantification of feature interactions. In particular, the contributions of interactions and dependencies may cancel each other out and must be disentangled to be fully revealed.
07.07.2025 15:39 — 👍 1 🔁 0 💬 1 📌 0
For example, suppose we predict kidney function (Y) from creatinine (C) and muscle mass (M), and that C reflects Y but also M, which is not linked to Y. Here, M becomes useful once combined with C, as it allows us to subtract irrelevant variation from C. In other words, C&M cooperate via dependence!
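A quick simulation makes the creatinine example concrete. The coefficients and noise levels below are made up for illustration, not taken from the paper:

```python
# Simulating the creatinine example: C reflects kidney function Y but also
# muscle mass M, which is unrelated to Y. All coefficients are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 100_000

y = rng.normal(size=n)                # kidney function
m = rng.normal(size=n)                # muscle mass, independent of y
c = y + m + 0.3 * rng.normal(size=n)  # creatinine reflects both

r2_c = LinearRegression().fit(c[:, None], y).score(c[:, None], y)
X = np.column_stack([c, m])
r2_cm = LinearRegression().fit(X, y).score(X, y)

print(f"R^2 using C alone: {r2_c:.2f}")   # ~0.48
print(f"R^2 using C and M: {r2_cm:.2f}")  # ~0.92: M lets the model subtract
                                          # the M-driven variation from C
```

On its own, M carries no information about Y, yet jointly with C it nearly doubles the explained variance: cooperation via dependence.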
07.07.2025 15:39 — 👍 1 🔁 0 💬 1 📌 0
Determining whether variables are relevant due to cooperation is crucial, as variables that cooperate must be considered jointly to understand their relevance. Notably, features cooperate not only through interactions but also through statistical dependencies, which existing methods neglect.
07.07.2025 15:38 — 👍 1 🔁 0 💬 1 📌 0
In many XAI applications, it is crucial to determine whether features contribute individually or only when combined. However, existing methods fail to reveal cooperation since they entangle individual contributions with those made via interactions and dependencies. We show how to disentangle them!
07.07.2025 15:37 — 👍 15 🔁 3 💬 1 📌 2
Feature importance measures can clarify or mislead. PFI, LOCO, and SAGE each answer a different question.
Understand how to pick the right tool and avoid spurious conclusions: mcml.ai/news/2025-03...
@fionaewald.bsky.social @ludwig-bothmann.bsky.social @giuseppe88.bsky.social @gunnark.bsky.social
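As a taste of why the question matters, here is a small sketch (a toy example of my own, not from the linked article) where PFI and LOCO disagree sharply on two nearly duplicate features:

```python
# Two near-duplicate features: permuting one hurts the fixed model (high PFI),
# but refitting without it barely loses anything (near-zero LOCO).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 20_000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)  # near copy of x1
y = x1 + x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

model = LinearRegression().fit(X, y)

# PFI: shuffle each column and measure the drop in the fixed model's R^2.
pfi = permutation_importance(model, X, y, n_repeats=5, random_state=0)

# LOCO: refit without each feature and measure the drop in R^2.
loco = []
for j in range(X.shape[1]):
    X_minus_j = np.delete(X, j, axis=1)
    r2_minus_j = LinearRegression().fit(X_minus_j, y).score(X_minus_j, y)
    loco.append(model.score(X, y) - r2_minus_j)

print("PFI :", pfi.importances_mean)  # both features look important (~0.4)
print("LOCO:", loco)                  # both look nearly irrelevant (~0.0)
```

Neither answer is wrong; PFI asks how much the fitted model relies on a feature, while LOCO asks how much predictive value is lost without it.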
Finally made it to bluesky as well ...
05.05.2025 08:58 — 👍 13 🔁 3 💬 2 📌 0
And the video of Gunnar's talk is up on YouTube in case you missed it: youtu.be/7MrMjabTbuM
@gunnark.bsky.social
I recall you had an iPad -- why did you switch?
27.11.2024 13:52 — 👍 1 🔁 0 💬 1 📌 0
A starter pack of people working on interpretability / explainability of all kinds, using theoretical and/or empirical approaches.
Reply or DM if you want to be added, and help me reach others!
go.bsky.app/DZv6TSS
Here's a fledgling starter pack for the AI community in Tübingen. Let me know if you'd like to be added!
go.bsky.app/NFbVzrA