🧠 To reason over text and track entities, we find that language models use three types of 'pointers'!
They were thought to rely only on a positional one, but when many entities appear, that system breaks down.
Our new paper shows what these pointers are and how they interact 👇
08.10.2025 14:56
🚨 New Paper 🚨
How effectively do reasoning models reevaluate their own thoughts? We find that:
- Models excel at identifying unhelpful thoughts but struggle to recover from them
- Smaller models can be more robust
- Self-reevaluation ability is far from true meta-cognitive awareness
1/N 🧵
13.06.2025 16:15
New Paper Alert! Can we precisely erase conceptual knowledge from LLM parameters?
Most methods are shallow, coarse, or overreach, adversely affecting related or general knowledge.
We introduce PISCES, a general framework for Precise In-parameter Concept EraSure. 🧵 1/
29.05.2025 16:22
Check out Benno's notes about our paper on the impact of interpretability 👇
Also, we are organizing a workshop at #ICML2025 which is inspired by some of the questions discussed in the paper: actionable-interpretability.github.io
15.04.2025 23:11
Have work on the actionable impact of interpretability findings? Consider submitting to our Actionable Interpretability workshop at ICML! See below for more info.
Website: actionable-interpretability.github.io
Deadline: May 9
03.04.2025 17:58
Forgot to tag the one and only @hadasorgad.bsky.social !!!
31.03.2025 17:39
🎉 Our Actionable Interpretability workshop has been accepted to #ICML2025! 🎉
> Follow @actinterp.bsky.social
> Website actionable-interpretability.github.io
@talhaklay.bsky.social @anja.re @mariusmosbach.bsky.social @sarah-nlp.bsky.social @iftenney.bsky.social
Paper submission deadline: May 9th!
31.03.2025 16:59
Communication between LLM agents can be super noisy! One rogue agent can easily drag the whole system into failure 😱
We find that (1) it's possible to detect rogue agents early on
(2) interventions can boost system performance by up to 20%!
Thread with details and paper link below!
13.02.2025 14:30
In a final experiment, we show that output-centric methods can be used to "revive" features previously thought to be "dead", reviving hundreds of SAE features in Gemma 2! 6/
28.01.2025 19:38
Unsurprisingly, while activating inputs better describe what activates a feature, output-centric methods do much better at predicting how steering the feature will affect the model's output!
But combining the two works best! 5/
28.01.2025 19:37
Next, we evaluate the widely used activating inputs approach versus two output-centric methods:
- vocabulary projection (a.k.a. the logit lens)
- tokens with max probability change in the output
Our output-centric methods require no more than a few inference passes! 4/
28.01.2025 19:36
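(The thread does not include code; below is a minimal sketch of the two output-centric ideas described in the post above. GPT-2 stands in for the models in the paper, the "feature" is a random placeholder direction rather than a real SAE feature, and the layer and steering strength are arbitrary choices of mine, not values from the paper.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

d_model = model.config.n_embd
feature_dir = torch.randn(d_model)        # placeholder "feature" direction (not a real SAE feature)
feature_dir = feature_dir / feature_dir.norm()

# (1) Vocabulary projection ("logit lens"): read the feature direction through the unembedding.
W_U = model.lm_head.weight                # (vocab_size, d_model)
vocab_scores = W_U @ feature_dir          # (vocab_size,)
top_projection = tok.convert_ids_to_tokens(vocab_scores.topk(10).indices.tolist())

# (2) Max probability change: add the feature to the residual stream at one layer
# and see which next-token probabilities move the most.
LAYER, ALPHA = 6, 8.0                     # arbitrary choices for this sketch

def steer(module, inputs, output):
    # GPT2Block returns a tuple whose first element is the hidden states.
    return (output[0] + ALPHA * feature_dir,) + output[1:]

ids = tok("The doctor said that", return_tensors="pt").input_ids
with torch.no_grad():
    p_base = model(ids).logits[0, -1].softmax(-1)
    handle = model.transformer.h[LAYER].register_forward_hook(steer)
    p_steered = model(ids).logits[0, -1].softmax(-1)
    handle.remove()

delta = p_steered - p_base
top_shift = tok.convert_ids_to_tokens(delta.topk(10).indices.tolist())
print("vocabulary projection:", top_projection)
print("largest probability increase under steering:", top_shift)
```

In a real setting the placeholder direction would be replaced by an actual feature (e.g. an SAE decoder column or a neuron), and the tokens surfaced by the two views would feed into the feature description.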
To fix this, we first propose using both input- and output-based evaluations for feature descriptions.
Our output-based eval measures how well a description of a feature captures its effect on the model's generation. 3/
28.01.2025 19:36
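(For illustration only: one very crude way to operationalize an output-based check, using my own keyword-overlap proxy rather than the paper's actual evaluation, is to compare a steered and an unsteered continuation against a candidate description. The feature direction, description, layer, and strength below are all hypothetical.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

feature_dir = torch.randn(model.config.n_embd)          # placeholder feature direction
feature_dir = feature_dir / feature_dir.norm()
description = "words related to weather and rain"       # hypothetical candidate description

def add_feature(module, inputs, output):
    # Add the feature to the residual stream (layer and strength are arbitrary here).
    return (output[0] + 10.0 * feature_dir,) + output[1:]

def continuation(prompt, steered):
    ids = tok(prompt, return_tensors="pt").input_ids
    handle = model.transformer.h[6].register_forward_hook(add_feature) if steered else None
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=30, do_sample=False,
                             pad_token_id=tok.eos_token_id)
    if handle is not None:
        handle.remove()
    return tok.decode(out[0, ids.shape[1]:])

def description_fit(text):
    # Crude keyword overlap as a stand-in for a proper (e.g. LLM-judged) comparison.
    words = [w for w in description.lower().split() if len(w) > 3]
    return sum(w in text.lower() for w in words)

prompt = "I looked out the window and"
base, steered = continuation(prompt, steered=False), continuation(prompt, steered=True)
print("fit without steering:", description_fit(base))
print("fit with steering:   ", description_fit(steered))
```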
Autointerp pipelines describe neurons and SAE features based on inputs that activate them.
This is problematic ⚠️
1. Collecting activations over large datasets is expensive, time-consuming, and often infeasible.
2. It overlooks how features affect model outputs!
2/
28.01.2025 19:35
How can we interpret LLM features at scale? 🤔
Current pipelines use activating inputs, which is costly and ignores how features causally affect model outputs!
We propose efficient output-centric methods that better predict the steering effect of a feature.
New preprint led by @yoav.ml 🧵 1/
28.01.2025 19:34
🚨 New Paper Alert: Open Problems in Machine Unlearning for AI Safety 🚨
Can AI truly "forget"? While unlearning promises data removal, controlling emergent capabilities is an inherent challenge. Here's why it matters: 👇
Paper: arxiv.org/pdf/2501.04952
1/8
10.01.2025 16:58
Most operation descriptions are plausible based on human judgment.
We also observe interesting operations implemented by heads, like the extension of time periods (day → month → year) and the association of known figures with years relevant to their historical significance (9/10)
18.12.2024 18:01
Next, we establish an automatic pipeline that uses GPT-4o to annotate the salient mappings from MAPS.
We map the attention heads of Pythia 6.9B and GPT-2 XL and manage to identify operations for most heads, reaching 60%-96% of heads in the middle and upper layers (8/10)
18.12.2024 18:00
(3) Smaller models tend to encode more relations in a single head
(4) In Llama-3.1 models, which use grouped-query attention, grouped heads often implement the same or similar relations (7/10)
18.12.2024 17:59
(1) Different models encode certain relations across attention heads to similar degrees
(2) Different heads implement the same relation to varying degrees, which has implications for localization and editing of LLMs (6/10)
18.12.2024 17:58
Using MAPS, we study the distribution of operations across heads in different models (Llama, Pythia, Phi, GPT-2) and see some cool trends of function-encoding universality and architecture biases: (5/10)
18.12.2024 17:58
Experiments on 20 operations and 6 LLMs show that MAPS estimations strongly correlate with the head's outputs during inference.
Ablating heads that implement an operation damages the model's ability to perform tasks requiring that operation more than removing other heads does (4/10)
18.12.2024 17:57
MAPS infers the head's functionality by examining different groups of mappings:
(A) Predefined relations: groups expressing certain relations (e.g. city of a country)
(B) Salient operations: groups for which the head induces the most prominent effect (3/10)
18.12.2024 17:57
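(A rough sketch of the underlying idea of reading a head in vocabulary space, under my own simplifications rather than the paper's MAPS implementation: push a source token's embedding through a head's OV weights, project onto the vocabulary, and check how highly the relation's target token ranks. GPT-2 stands in for the studied models, LayerNorms and attention patterns are ignored, and the country→capital pairs are toy examples.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
d, n_heads = model.config.n_embd, model.config.n_head
dh = d // n_heads

def ov_matrix(layer, head):
    # Per-head value and output projections from GPT-2's fused attention weights.
    attn = model.transformer.h[layer].attn
    W_V = attn.c_attn.weight[:, 2 * d + head * dh : 2 * d + (head + 1) * dh]   # (d, dh)
    W_O = attn.c_proj.weight[head * dh : (head + 1) * dh, :]                   # (dh, d)
    return W_V @ W_O                                                           # (d, d)

@torch.no_grad()
def relation_score(layer, head, pairs):
    """Mean reciprocal rank of the target token when the source token's embedding is
    pushed through the head's OV circuit and projected back onto the vocabulary."""
    E, U = model.transformer.wte.weight, model.lm_head.weight                  # (vocab, d)
    OV = ov_matrix(layer, head)
    rr = []
    for src, tgt in pairs:
        s, t = tok.encode(" " + src)[0], tok.encode(" " + tgt)[0]
        scores = (E[s] @ OV) @ U.T                                             # (vocab,)
        rank = (scores > scores[t]).sum().item() + 1
        rr.append(1.0 / rank)
    return sum(rr) / len(rr)

# Toy "country -> capital city" pairs, just to illustrate the scoring.
pairs = [("France", "Paris"), ("Japan", "Tokyo"), ("Italy", "Rome")]
scores = {(l, h): relation_score(l, h, pairs)
          for l in range(model.config.n_layer) for h in range(n_heads)}
best = sorted(scores, key=scores.get, reverse=True)[:5]
print("heads whose OV circuit best maps these pairs:", best)
```

Roughly, the salient-operations direction (B) would instead look at which target tokens receive the largest scores across many source tokens, without starting from a predefined relation.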
Previous works that analyze attention heads mostly focused on studying their attention patterns or outputs for certain tasks or circuits.
Here, we take a different approach, inspired by @anthropic.com @guydar.bsky.social, and inspect the head in the vocabulary space (2/10)
18.12.2024 17:56
What's in an attention head? 🤯
We present an efficient framework, MAPS, for inferring the functionality of attention heads in LLMs ✨directly from their parameters✨
A new preprint with Amit Elhelo 🧵 (1/10)
18.12.2024 17:55
Volunteer to join ACL 2025 Programme Committee
We invite nominations to join the ACL 2025 PC as a reviewer or area chair (AC). Review process through the ARR Feb cycle. Tentative timeline: review 1-20 Mar 2025, rebuttal 26-31 Mar 2025. ACs must be available throughout the Feb cycle. Nominations by 20 Dec 2024:
shorturl.at/TaUh9 #NLProc #ACL2025NLP
16.12.2024 00:28
Stanford Professor of Linguistics and, by courtesy, of Computer Science, and member of @stanfordnlp.bsky.social and The Stanford AI Lab. He/Him/His. https://web.stanford.edu/~cgpotts/
First Workshop on Large Language Model Memorization.
Visit our website at https://sites.google.com/view/memorization-workshop/
Prof, Chair for AI & Computational Linguistics,
Head of MaiNLP lab @mainlp.bsky.social, LMU Munich
Co-director CIS @cislmu.bsky.social
Visiting Prof ITU Copenhagen @itu.dk
ELLIS Fellow @ellis.eu
Vice-President ACL
PI MCML @munichcenterml.bsky.social
Assistant professor at Yale Linguistics. Studying computational linguistics, cognitive science, and AI. He/him.
Let's build AIs we can trust!
PhD student at Brown University working on interpretability. Prev. at Ai2, Google
Helping machines make sense of the world. Asst Prof @icepfl.bsky.social; Before: @stanfordnlp.bsky.social @uwnlp.bsky.social AI2 #NLProc #AI
Website: https://atcbosselut.github.io/
Recently a principal scientist at Google DeepMind. Joining Anthropic. Most (in)famous for inventing diffusion models. AI + physics + neuroscience + dynamical systems.
Human/AI interaction. ML interpretability. Visualization as design, science, art. Professor at Harvard, and part-time at Google DeepMind.
https://generalstrikeus.com/
PhD student in computational linguistics at UPF
chengemily1.github.io
Previously: MIT CSAIL, ENS Paris
Barcelona
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill
Building robust LLMs @Cohere
Machine learning, interpretability, visualization, Language Models, People+AI research
Epigenetic Inheritance, Neuroscience & anything biology-related
https://www.odedrechavilab.com/
https://www.qedscience.com
Organizer of "The Woodstock of Biology"
TED: https://shorturl.at/myFTY
Huberman Lab Podcast: https://youtu.be/CDUetQMKM6g
Associate Professor of Machine Learning, University of Oxford;
OATML Group Leader;
Director of Research at the UK government's AI Safety Institute (formerly UK Taskforce on Frontier AI)
PhD student/research scientist intern at UCL NLP/Google DeepMind (50/50 split). Previously MS at KAIST AI and research engineer at Naver Clova. #NLP #ML https://soheeyang.github.io/