@angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
#VISxAI IS BACK!!
Submit your interactive "explainables" and "explorables" that visualize, interpret, and explain AI. #IEEEVIS
Deadline: July 30, 2025
visxai.io
I'll be at #CHI2025
If you are excited about interpretability and human-AI alignment, let's chat!
And come see Abstraction Alignment in the Explainable AI paper session on Monday at 4:20 JST.
Check out Abstraction Alignment at #CHI2025!
Paper: arxiv.org/abs/2407.12543
Demo: vis.mit.edu/abstraction-...
Video: www.youtube.com/watch?v=cLi9...
Project: vis.mit.edu/pubs/abstrac...
With Hyemin (Helen) Bang, @henstr.bsky.social, and @arvind.bsky.social
Abstraction Alignment reframes alignment around conceptual relationships, not just concepts.
It helps us audit models, datasets, and even human knowledge.
I'm excited to explore ways to extract abstractions from models and align them to individual users' perspectives.
Abstraction Alignment works on datasets too!
Medical experts analyzed clinical dataset abstractions, uncovering issues like overuse of unspecified diagnoses.
This mirrors real-world updates to medical abstractions, showing how models can help us rethink human knowledge.
[Image: Two examples of Abstraction Alignment applied to a language model.]
Language models often prefer specific answers even at the cost of performance.
But Abstraction Alignment reveals that the concepts an LM considers are often abstraction-aligned, even when it's wrong.
This helps separate surface-level errors from deeper conceptual misalignment.
[Image: A screenshot of the Abstraction Alignment interface.]
And we packaged Abstraction Alignment and its metrics into an interactive interface so YOU can explore it!
https://vis.mit.edu/abstraction-alignment/
Aggregating Abstraction Alignment helps us understand a modelβs global behavior.
We developed metrics to support this (rough code sketch below):
Abstraction match: most aligned concepts
Concept co-confusion: frequently confused concepts
Subgraph preference: preference for abstraction levels
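For illustration, here is a minimal Python sketch of how one such metric (concept co-confusion) could be computed from propagated probability masses; the function name, threshold, and exact formulation are assumptions for this sketch, not the paper's implementation.

```python
from collections import Counter
from itertools import combinations

def concept_co_confusion(per_example_mass, threshold=0.1):
    """Count how often pairs of concepts are 'active' together across examples.

    per_example_mass: list of dicts mapping concept -> propagated probability mass.
    threshold: minimum mass for a concept to count as active (assumed value).
    """
    counts = Counter()
    for mass in per_example_mass:
        active = sorted(c for c, m in mass.items() if m >= threshold)
        counts.update(combinations(active, 2))  # each co-active pair counts once per example
    return counts

# Toy example: "oak" and "palm" repeatedly receive mass together.
examples = [
    {"oak": 0.6, "palm": 0.3, "tree": 0.9, "shark": 0.05},
    {"oak": 0.4, "palm": 0.5, "tree": 0.9, "shark": 0.02},
]
print(concept_co_confusion(examples).most_common(3))
```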
Abstraction Alignment compares model behavior to human abstractions.
By propagating the model's uncertainty through an abstraction graph, we can see how well it aligns with human knowledge.
E.g., confusing oaks with palms is more aligned than confusing oaks with sharks.
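A minimal Python sketch of that propagation step, assuming the abstraction is a tree over class labels; the tree, labels, and propagate function here are illustrative, not the paper's code.

```python
from collections import defaultdict

# Hypothetical abstraction tree: each node maps to its parent concept;
# roots ("plant", "animal") have no entry and so map to None.
parent = {
    "oak": "tree", "palm": "tree",
    "shark": "fish",
    "tree": "plant", "fish": "animal",
}

def propagate(leaf_probs):
    """Propagate model uncertainty up the abstraction tree.

    leaf_probs: dict of leaf class -> model probability (e.g., softmax output).
    Returns a dict of node -> accumulated probability mass.
    """
    node_mass = defaultdict(float)
    for leaf, p in leaf_probs.items():
        node = leaf
        while node is not None:
            node_mass[node] += p
            node = parent.get(node)  # walk toward the root
    return dict(node_mass)

# Confusing oak with palm keeps most mass inside "tree"/"plant" (aligned) ...
print(propagate({"oak": 0.55, "palm": 0.40, "shark": 0.05}))
# ... while confusing oak with shark splits mass between "plant" and "animal" (misaligned).
print(propagate({"oak": 0.55, "shark": 0.40, "palm": 0.05}))
```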
Interpretability identifies models' learned concepts (wheels).
But human reasoning is built on abstractions: relationships between concepts that help us generalize (wheels → car).
To measure alignment, we must test if models learn human-like concepts AND abstractions.
[Image: An overview of Abstraction Alignment, including its authors and links to the paper, demo, and code.]
#CHI2025 paper on human-AI alignment!
Models can learn the right concepts but still be wrong in how they relate them.
Abstraction Alignment evaluates whether models learn human-aligned conceptual relationships.
It reveals misalignments in LLMs and medical datasets.
arxiv.org/abs/2407.12543
Hey Julian, thank you so much for putting this together! My research is on interpretability and I'd love to be added.