Angie Boggust

@angieboggust.bsky.social

MIT PhD candidate in the VIS group working on interpretability and human-AI alignment

404 Followers  |  250 Following  |  11 Posts  |  Joined: 19.11.2024

Latest posts by angieboggust.bsky.social on Bluesky

Workshop on Visualization for AI Explainability
The role of visualization in artificial intelligence (AI) has gained significant attention in recent years. With the growing complexity of AI models, the critical need for understanding their inner-workin...

#VISxAI IS BACK!! 🤖📊

Submit your interactive “explainables” and “explorables” that visualize, interpret, and explain AI. #IEEEVIS

📆 Deadline: July 30, 2025

visxai.io

07.05.2025 21:56 — 👍 7    🔁 4    💬 0    📌 0

I'll be at #CHI2025 🌸

If you are excited about interpretability and human-AI alignment — let's chat!

And come see Abstraction Alignment ⬇️ in the Explainable AI paper session on Monday at 4:20 JST

24.04.2025 13:05 — 👍 4    🔁 0    💬 0    📌 0
Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships
While interpretability methods identify a model's learned concepts, they overlook the relationships between concepts that make up its abstractions and inform its ability to generalize to new data. To ...

Check out Abstraction Alignment at #CHI2025!

📄Paper: arxiv.org/abs/2407.12543
💻Demo: vis.mit.edu/abstraction-...
🎥Video: www.youtube.com/watch?v=cLi9...
🔗Project: vis.mit.edu/pubs/abstrac...

With Hyemin (Helen) Bang, @henstr.bsky.social, and @arvind.bsky.social

14.04.2025 15:48 — 👍 3    🔁 2    💬 0    📌 0

Abstraction Alignment reframes alignment around conceptual relationships, not just concepts.

It helps us audit models, datasets, and even human knowledge.

I'm excited to explore ways to 🏗 extract abstractions from models and 👥 align them to individual users' perspectives.

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0

Abstraction Alignment works on datasets too!

Medical experts analyzed clinical dataset abstractions, uncovering issues like overuse of unspecified diagnoses.

This mirrors real-world updates to medical abstractions — showing how models can help us rethink human knowledge.

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0
Two examples of Abstraction Alignment applied to a language model.

Language models often prefer specific answers even at the cost of performance.

But Abstraction Alignment reveals that the concepts an LM considers are often abstraction-aligned, even when it's wrong.

This helps separate surface-level errors from deeper conceptual misalignment.

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0
A screenshot of the Abstraction Alignment interface.

And we packaged Abstraction Alignment and its metrics into an interactive interface so YOU can explore it!

🔗https://vis.mit.edu/abstraction-alignment/

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0

Aggregating Abstraction Alignment helps us understand a model's global behavior.

We developed metrics to support this:
↔️ Abstraction match – most aligned concepts
💡 Concept co-confusion – frequently confused concepts
🗺️ Subgraph preference – preference for abstraction levels

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0
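
As a rough sketch of what one such aggregate metric could look like in code, the snippet below tallies a "concept co-confusion"-style count across a dataset. The function name, probability threshold, and counting scheme are illustrative assumptions for this sketch only; the paper defines the actual metrics.

from collections import Counter
from itertools import combinations

def concept_co_confusion(predictions, threshold=0.1):
    # Count how often pairs of concepts jointly receive notable
    # probability mass on the same input across a dataset.
    counts = Counter()
    for probs in predictions:  # probs: dict mapping concept -> probability
        active = sorted(c for c, p in probs.items() if p >= threshold)
        for pair in combinations(active, 2):
            counts[pair] += 1
    return counts

# Two toy predictions: oak and palm exceed the threshold together in both,
# oak and shark in only one.
predictions = [
    {"oak": 0.60, "palm": 0.30, "shark": 0.10},
    {"oak": 0.50, "palm": 0.45, "shark": 0.05},
]
print(concept_co_confusion(predictions).most_common())
# [(('oak', 'palm'), 2), (('oak', 'shark'), 1), (('palm', 'shark'), 1)]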

Abstraction Alignment compares model behavior to human abstractions.

By propagating the model's uncertainty through an abstraction graph, we can see how well it aligns with human knowledge.

E.g., confusing oaks🌳 with palms🌴 is more aligned than confusing oaks🌳 with sharks🦈.

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0
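
A minimal sketch of the propagation step, assuming a toy three-level hierarchy; the graph, class probabilities, and the entropy summary at the end are illustrative choices for this sketch rather than the paper's implementation (see arxiv.org/abs/2407.12543 for the actual method and metrics).

from collections import defaultdict
import math

# Toy abstraction graph: leaf class -> parent concept -> root.
parents = {
    "oak": "tree", "palm": "tree",
    "shark": "fish", "trout": "fish",
    "tree": "living thing", "fish": "living thing",
}

def propagate(leaf_probs):
    # Sum each leaf's probability mass up through its ancestors.
    mass = defaultdict(float, leaf_probs)
    for leaf, p in leaf_probs.items():
        node = leaf
        while node in parents:
            node = parents[node]
            mass[node] += p
    return dict(mass)

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An oak image: confusing oak with palm keeps the mass inside "tree"...
aligned = propagate({"oak": 0.55, "palm": 0.40, "shark": 0.03, "trout": 0.02})
# ...while confusing oak with shark spreads it across distant subtrees.
misaligned = propagate({"oak": 0.55, "shark": 0.40, "palm": 0.03, "trout": 0.02})

# Spread of the propagated mass at the parent level ("tree" vs. "fish"):
print(entropy([aligned["tree"], aligned["fish"]]))        # ~0.29 bits (concentrated)
print(entropy([misaligned["tree"], misaligned["fish"]]))  # ~0.98 bits (dispersed)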

Interpretability identifies models' learned concepts (wheels 🛞).

But human reasoning is built on abstractions — relationships between concepts that help us generalize (wheels 🛞→ car 🚗).

To measure alignment, we must test if models learn human-like concepts AND abstractions.

14.04.2025 15:48 — 👍 0    🔁 0    💬 1    📌 0
An overview of Abstraction Alignment, including its authors and links to the paper, demo, and code.

#CHI2025 paper on human–AI alignment!🧵

Models can learn the right concepts but still be wrong in how they relate them.

✨Abstraction Alignment✨ evaluates whether models learn human-aligned conceptual relationships.

It reveals misalignments in LLMs💬 and medical datasets🏥.

🔗 arxiv.org/abs/2407.12543

14.04.2025 15:48 — 👍 9    🔁 0    💬 1    📌 2

Hey Julian — thank you so much for putting this together! My research is on interpretability and I'd love to be added.

24.11.2024 14:21 — 👍 6    🔁 0    💬 1    📌 0
