💡 What can we do to make self-explanations less ambiguous?
-> We propose to automatically adapt explanations to the task by stitching together SE-GNNs with white-box models and combining their explanations.
13.07.2025 17:29
- Self-explanations can be "unfaithful" by design
13.07.2025 17:29
- Models encoding different tasks can produce the same self-explanations, limiting the usefulness of explanations
13.07.2025 17:29
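A toy illustration of this ambiguity (entirely hypothetical, my own construction rather than an example from the paper): a triangle subgraph is a plausible self-explanation both for a model that detects triangles and for one that detects cycles, so the explanation alone cannot tell the two tasks apart.

```python
# Hypothetical toy example: two models encoding *different* tasks can
# still point at the *same* explanation subgraph.

def has_triangle(edges):
    # Task A: does the graph (edge list) contain a triangle?
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return any(adj[u] & adj[v] - {u, v} for u, v in edges)

def has_cycle(edges):
    # Task B: does the graph contain *any* cycle? (union-find sketch)
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

graph = [(0, 1), (1, 2), (0, 2), (2, 3)]
explanation = [(0, 1), (1, 2), (0, 2)]  # the triangle

# Both tasks answer 1 on the full graph, and the triangle alone
# reproduces both predictions -- a self-explainer for either task
# could plausibly return the exact same explanation.
print(has_triangle(graph), has_cycle(graph))              # True True
print(has_triangle(explanation), has_cycle(explanation))  # True True
# Yet the tasks genuinely differ, e.g. on a 4-cycle:
print(has_triangle([(0, 1), (1, 2), (2, 3), (0, 3)]))     # False
print(has_cycle([(0, 1), (1, 2), (2, 3), (0, 3)]))        # True
```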
Studying some popular models, we found that:
- The information that self-explanations convey can radically change based on the underlying task to be explained, which is, however, generally unknown
13.07.2025 17:29
🤔 What are the properties of self-explanations in GNNs? What can we expect from them?
We investigate this in our #ICML25 paper.
Come have a chat at poster session 5, Thu 17, 11 am.
w. Sagar Malhotra @andreapasspr.bsky.social @looselycorrect.bsky.social
13.07.2025 17:29
Happening tomorrow!
Poster number 508
Saturday's session 10-12:30
25.04.2025 14:27
3. ITS ROLE IN OOD GENERALISATION
Domain-Invariant GNNs (DIGNNs) make predictions over a domain-invariant subgraph to achieve OOD generalisation. We show that unless this subgraph is also *sufficient*, DIGNNs are not domain-invariant.
5/5
17.04.2025 13:43
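A minimal sketch of the sufficiency point (a toy setup of my own, not the paper's construction): if the "invariant" part S carries no label information, any predictor restricted to S is constant, so it can be perfect in one domain yet fail completely in another.

```python
# Toy setup (illustrative): S is identical in both domains, but the
# label is carried by the domain-specific remainder of the graph.
domain1 = [("S", 0), ("S", 0)]   # pairs of (invariant part, label)
domain2 = [("S", 1), ("S", 1)]

def predictor(s):
    # Any function of S alone is constant here, since S never varies.
    # Returning 0 is the best it can do on domain 1.
    return 0

acc1 = sum(predictor(s) == y for s, y in domain1) / len(domain1)
acc2 = sum(predictor(s) == y for s, y in domain2) / len(domain2)
print(acc1, acc2)  # 1.0 0.0 -- perfect in-domain, fails OOD
```

Since S is insufficient for the label, high accuracy on one domain forces the model to behave differently from what any truly invariant predictor could do on the other.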
2. HOW GNNs AIM TO ACHIEVE IT
We highlight several architectural design choices of Self-Explainable GNNs favoring information leakage from nodes outside the explanation, and propose mitigations.
4/5
17.04.2025 13:43
We propose rethinking faithfulness from three essential angles:
1. HOW TO COMPUTE IT
Many ways to compute faithfulness exist, but we show:
- they are not interchangeable
- some of them do not have the desired semantics
3/5
17.04.2025 13:43
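To make the non-interchangeability concrete, a hypothetical sketch (the names `sufficiency` and `necessity` and the toy model are mine, not the paper's metrics): the same explanation can score perfectly under one notion of faithfulness and fail the other.

```python
# Toy "GNN": predicts 1 iff the graph (edge list) contains a triangle.
def toy_model(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return int(any(adj[u] & adj[v] - {u, v} for u, v in edges))

def sufficiency(model, graph, expl):
    # Keeping only the explanation should preserve the prediction.
    return model(expl) == model(graph)

def necessity(model, graph, expl):
    # Removing the explanation should change the prediction.
    rest = [e for e in graph if e not in expl]
    return model(rest) != model(graph)

# Two disjoint triangles; the explanation is just the first one.
graph = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
expl = [(0, 1), (1, 2), (0, 2)]

print(sufficiency(toy_model, graph, expl))  # True: enough on its own
print(necessity(toy_model, graph, expl))    # False: the other triangle remains
```

The two scores disagree on the very same (model, graph, explanation) triple, so picking one or the other as "the" faithfulness metric changes the conclusion.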
Paper: "Reconsidering Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs"
Link: openreview.net/forum?id=kiO...
Poster session: 26 April 10am
2/5
17.04.2025 13:43
Faithfulness of GNN explanations isn't one-size-fits-all 🧢
Our last @iclr-conf.bsky.social paper breaks it down across:
1. Evaluation metrics
2. Model implementations
3. OOD generalisation
w: Antonio L. @looselycorrect.bsky.social @andreapasserini.bsky.social
1/5
17.04.2025 13:43
Kudos to the organisers for setting up the poster session in the fanciest room I've ever seen!
08.12.2024 21:26
Hello World!
02.12.2024 09:45
Assistant Professor at the Department of Computer Science, University of Liverpool.
https://lutzoe.github.io/
PostDoc @ Uni Tübingen
explainable AI, causality
gunnarkoenig.com
Senior Researcher, Physics-based AI/ML, Computational Mechanics
🇮🇹 Stats PhD @ University of Edinburgh 🏴󠁧󠁢󠁳󠁣󠁴󠁿
@ellis.eu PhD - visiting @avehtari.bsky.social 🇫🇮
Monte Carlo, UQ.
Interested in many things relating to UQ, keen to learn applications in climate/science.
https://www.branchini.fun/about
ELLIS PhD student at the University of Edinburgh
https://lenazellinger.github.io/
PhD student in machine learning at TU Wien.
Graphs | active learning | learning theory
https://maxthiessen.github.io
PhD student @TUDarmstadt AI&ML Lab
Interested in #neuro #symbolic #visualreasoning #remotesensing
https://d-ochs.github.io
https://ml-research.github.io/people/dochs/index.html
PhD candidate at AI & ML lab @ TU Darmstadt (he/him). Research on deep learning, representation learning, neuro-symbolic AI, explainable AI, verifiable AI and interactive AI
🧬🦠 Postdoc AMR/Microbial Genomics/Bioinformatics. Tracking AMR bacteria and their MGEs from a One-Health perspective. Passionate ➡️ phylogenomics, GDL, TDL, deep learning, graphs, plasmids, phages,... 🇦🇷 in 🇨🇭
PhD student @UniTrento | Computer vision hacker, unorganised thinker
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models
Web: sukrutrao.github.io
Research Scientist @ Samsung AI https://corneliocristina.github.io
Neuro-Symbolic AI, Neuro-Symbolic Applications, Open Information Extraction, AI4Science
Kansas State University. Neurosymbolic AI, Knowledge Graphs, Ontologies, Semantic Web. My opinions are my own. https://people.cs.ksu.edu/~hitzler/
Assistant Professor at Imperial College London | EEE Department and I-X.
Neuro-symbolic AI, Safe AI, Generative Models
Previously: Post-doc at TU Wien, DPhil at the University of Oxford.
Combining Bayesian and Neural approaches for Structured Data.
ComBayNS workshop @ IJCNN 2025 Conference, Rome, June 30-July 2 2025.
#probabilistic-ml #circuits #tensor-networks
PhD student @ University of Edinburgh
https://loreloc.github.io/
Machine Learning Researcher | Trying to unbox black-boxes.
#NLProc PhD Student & Research Associate at Bielefeld University
Working on: Question Answering over Linked Data, Semantic Web, Lexical Knowledge & Compositionality in AI
https://davidmschmidt.de