David Bau

@davidbau.bsky.social

Interpretable Deep Networks. http://baulab.info/ @davidbau

2,193 Followers  |  242 Following  |  155 Posts  |  Joined: 16.10.2023

Latest posts by davidbau.bsky.social on Bluesky

The Doge of Venice visits a Murano glassworks in the 17th century. I will talk about why glassmaking in this era has some similarities to AI research today.

At the #Neurips2025 mechanistic interpretability workshop I gave a brief talk about Venetian glassmaking, since I think we face a similar moment in AI research today.

Here is a blog post summarizing the talk:

davidbau.com/archives/202...

11.12.2025 15:02 · 👍 12  🔁 3  💬 2  📌 2
Arnab Sen Sharma (@arnabsensharma.bsky.social) Thanks to my collaborators Giordano Rogers, @natalieshapira.bsky.social, and @davidbau.bsky.social. Check out our paper for more details: 📜 arxiv.org/pdf/2510.26784 💻 https://github.com/arnab-api/filter 🌐 filter.baulab.info

Paper, code, website. Please help reshare Arnab's bsky thread:

bsky.app/profile/arn...

06.11.2025 14:00 · 👍 3  🔁 0  💬 0  📌 0

When you read the paper, be sure to check out the appendix where @arnab_api discusses how pointer and value data are entangled in filters.

And possible applications of the filter mechanism, such as a zero-shot "lie detector" that can flag incorrect statements in ordinary text.

06.11.2025 14:00 · 👍 2  🔁 0  💬 1  📌 0
Arnab Sen Sharma (@arnabsensharma.bsky.social) When the question is presented *after* the options, filter heads can achieve high causality scores across language and format changes! This suggests that the encoded predicate is robust against such perturbations.

Curiously, when the question precedes the list of candidates, there is an abstract predicate for "this is the answer I am looking for" that tags items in a list as soon as they are seen.
bsky.app/profile/arn...

06.11.2025 14:00 · 👍 2  🔁 0  💬 1  📌 0

The neural representations for LLM filter heads are language-independent!

If we pick up the representation for a question in French, it will accurately match items expressed in Thai.

06.11.2025 14:00 · 👍 2  🔁 0  💬 1  📌 0
Arnab Sen Sharma (@arnabsensharma.bsky.social) 🔍 In Llama-70B and Gemma-27B, we found special attention heads that consistently focus their attention on the filtered items. This behavior seems consistent across a range of different formats and semantic types.

Arnab calls predicate attention heads "filter heads" because the same heads filter many properties across objects, people, and landmarks.

The generic structure resembles functional programming's "filter" function, with a common mechanism handling a wide range of predicates.
bsky.app/profile/arn...
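As a reminder of the analogy: functional programming's filter is one generic routine parameterized by an arbitrary predicate. A minimal Python sketch (the menu and predicates here are invented for illustration):

# One generic mechanism (filter) reused with many different predicates.
menu = ["grilled salmon", "mushroom risotto", "beef stew", "caprese salad"]

def is_vegetarian(dish: str) -> bool:
    # Toy predicate; a real check would consult ingredient data.
    return dish in {"mushroom risotto", "caprese salad"}

print(list(filter(is_vegetarian, menu)))  # ['mushroom risotto', 'caprese salad']

# The exact same filter call handles any other predicate unchanged:
print(list(filter(lambda d: d.startswith("b"), menu)))  # ['beef stew']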

06.11.2025 14:00 · 👍 2  🔁 0  💬 1  📌 0
Arnab Sen Sharma (@arnabsensharma.bsky.social) How can a language model find the veggies in a menu? New pre-print where we investigate the internal mechanisms of LLMs when filtering on a list of options. Spoiler: turns out LLMs use strategies surprisingly similar to functional programming (think "filter" from python)! 🧵

The secret life of an LM is defined by its internal data types. Inner layers transport abstractions that are more robust than words, like concepts, functions, or pointers.

In new work released yesterday, @arnabsensharma.bsky.social et al. identify a data type for *predicates*.

bsky.app/profile/arn...

06.11.2025 14:00 · 👍 14  🔁 2  💬 1  📌 2

How embarrassing for me and confusing to the LLM!

OK, here it is fixed. The nice thing about the workbench is that it takes just a second to edit the prompt, and you can see how the LLM responds, now deciding very early that it should be ':'

11.10.2025 14:21 · 👍 3  🔁 0  💬 0  📌 1

... @wendlerc.bsky.social and @sfeucht.bsky.social ....

11.10.2025 12:25 · 👍 1  🔁 0  💬 0  📌 0
NDIF Team (@ndif-team.bsky.social) This is a public beta, so we expect bugs and actively want your feedback: https://forms.gle/WsxmZikeLNw34LBV9

Help me thank the NDIF team for rolling out workbench.ndif.us/ by using it to make your own discoveries inside LLM internals. We should all be looking inside our LLMs.

Share the tool! Share what you find!

And send the team feedback -
bsky.app/profile/ndi...

11.10.2025 12:02 · 👍 5  🔁 1  💬 0  📌 0

That process was noticed by @wendlerch in arxiv.org/abs/2402.10588 and studied by @sheridan_feucht in dualroute.baulab.info

Try it out yourself on workbench.ndif.us/.

Does it work with other words? Can you find interesting exceptions? How about prompts beyond translation?

11.10.2025 12:02 · 👍 7  🔁 0  💬 2  📌 0

The lens reveals: the model does NOT go directly from amore to "amor" or "amour" by just dropping or adding letters!

Instead it first "thinks" about the (English) word "love".

In other words: LLMs translate using *concepts*, not tokens.

11.10.2025 12:02 · 👍 33  🔁 5  💬 3  📌 0

Enter a translation prompt: "Italiano: amore, Español: amor, François:".

The workbench doesn't just show you the model's output. It shows the grid of internal states that lead to the output. Researchers call this visualization the "logit lens".
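If you'd rather reproduce that grid in a script, here is a minimal logit-lens sketch in plain HuggingFace/PyTorch. It uses small GPT-2 purely to keep the example light (a small model may not show the effect as crisply as the larger models on workbench), and it is a sketch of the general technique, not the workbench's own implementation:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Italiano: amore, Español: amor, Français:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Logit lens: project each layer's hidden state at the final position
# through the final layer norm and the unembedding matrix.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    print(f"layer {layer:2d}: {tok.decode(logits.argmax().item())!r}")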

11.10.2025 12:02 · 👍 10  🔁 0  💬 1  📌 0
NDIF Team (@ndif-team.bsky.social) Ever wished you could explore what's happening inside a 405B parameter model without writing any code? Workbench, our AI interpretability interface, is now live for public beta at workbench.ndif.us!

But why theorize? We can actually look at what it does.

Visit the NDIF workbench here: workbench.ndif.us/, and pull up any LLM that can translate, like GPT-J-6b. If you register an account you can access larger models.

bsky.app/profile/ndi...

11.10.2025 12:02 · 👍 6  🔁 0  💬 1  📌 0

What does an LLM do when it translates from Italian "amore" to Spanish "amor" or French "amour"?

That's easy! (you might think) Because surely it knows: amore, amor, amour are all based on the same Latin word. It can just drop the "e", or add a "u".

11.10.2025 12:02 · 👍 37  🔁 4  💬 2  📌 1

Looking forward to #COLM2025 tomorrow. DM me if you'll also be there and want to meet to chat.

06.10.2025 12:10 · 👍 5  🔁 0  💬 0  📌 0
NDIF Team We're a research computing project cracking open the mysteries inside large-scale AI systems. The NSF National Deep Inference Fabric consists of a unique combination of hardware and software that pr...

And kudos to @ndif-team.bsky.social for keeping up with weekly YouTube video posts on AI interpretability!

www.youtube.com/@NDIFTeam

03.10.2025 18:53 · 👍 2  🔁 0  💬 0  📌 0

There are a lot of interesting details that surface when you use SAEs to understand and control diffusion image synthesis models. Learn more in @wendlerc.bsky.social's talk.

03.10.2025 18:52 · 👍 5  🔁 0  💬 1  📌 0
David Bau on How Artificial Intelligence Works Yascha Mounk and David Bau delve into the β€œblack box” of AI.

On the Good Fight podcast with substack.com/@yaschamounk I give a quick but careful primer on how modern AI works.

I also chat about our responsibility as machine learning scientists, and what we need to fix to get AI right.

Take a listen and reshare -

www.persuasion.community/p/david-bau

03.10.2025 08:58 · 👍 7  🔁 3  💬 0  📌 0

I love the 'opinionated' approach taken by Aaron + team in this survey. It captures the ongoing work around the central causal puzzles we face in mechanistic interpretability.

01.10.2025 14:25 · 👍 3  🔁 0  💬 0  📌 0

Thanks @kmahowald.bsky.social!

bsky.app/profile/kmah...

28.09.2025 00:46 · 👍 2  🔁 0  💬 0  📌 0
The Dual-Route Model of Induction Do LLMs copy meaningful text by rote or by understanding meaning? Webpage for The Dual-Route Model of Induction (Feucht et al., 2025).

Read more at arxiv.org/abs/2504.03022 <- at COLM

footprints.baulab.info <- token context erasure
arithmetic.baulab.info <- concept parallelograms
dualroute.baulab.info <- the second induction route,
with a neat Colab notebook.

@ericwtodd.bsky.social @byron.bsky.social @diatkinson.bsky.social

27.09.2025 20:54 · 👍 7  🔁 0  💬 1  📌 0

The takeaway for me: LLMs separate their token processing from their conceptual processing, akin to humans' dual-route processing of speech.

We need to be aware of when an LM is thinking about tokens versus concepts.

It does both, and it makes a difference which way it's thinking.

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

If token-processing and concept-processing are largely separate, does killing one kill the other? Chris Olah's team hypothesized in Olsson 2022 that in-context learning (ICL) emerges from token induction.

@keremsahin22.bsky.social + Sheridan are finding cool ways to look into Olah's induction hypothesis too!

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

The representation space within the concept induction heads also has a more "meaningful" geometry than the transformer as a whole.

Sheridan discovered (NeurIPS 2025 mechanistic interpretability workshop) that semantic vector arithmetic works better in this space. (Token semantics work in token space.)

arithmetic.baulab.info/
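To make "vector arithmetic" concrete: the parallelogram test asks whether b - a + c lands nearest d. A toy sketch with invented 3-D vectors standing in for head representations (real vectors would be read out of the model):

import numpy as np

def nearest(query, vocab):
    # Return the key whose vector is most cosine-similar to the query.
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(vocab, key=lambda w: cos(query, vocab[w]))

# Invented toy embeddings, for illustration only.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

# Parallelogram test: king - man + woman should land nearest "queen".
print(nearest(vocab["king"] - vocab["man"] + vocab["woman"], vocab))  # queen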

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

If you disable token induction heads and ask the model to copy text with only the concept induction heads, it will NOT copy exactly. It will paraphrase the text.

That happens even for computer code. They copy the BEHAVIOR of the code, but write it in a totally different way!

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

An amazing thing about the "concepts" in this 2nd route: they are *not* literal words. They are totally language-independent.

If the target context is in Chinese, they will copy the concept into Chinese. Or patch them between runs to get Italian. They mediate translation.
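The patching trick can be sketched with a plain PyTorch forward hook. The layer index and prompts below are placeholders of my choosing (the paper patches the specific concept induction heads it identifies, not a whole block):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
block = model.transformer.h[6]  # placeholder block, not the identified heads

# Run A: stash the block's hidden-state output.
stash = {}
hook = block.register_forward_hook(lambda m, i, o: stash.update(act=o[0].detach()))
with torch.no_grad():
    model(**tok("The cat sat on the mat", return_tensors="pt"))
hook.remove()

# Run B: overwrite the final-position activation with the stashed one.
def patch(module, inputs, outputs):
    h = outputs[0].clone()
    h[:, -1] = stash["act"][:, -1]  # patch just the last token position
    return (h,) + outputs[1:]

hook = block.register_forward_hook(patch)
with torch.no_grad():
    out = model(**tok("Il gatto si sedeva", return_tensors="pt"))
hook.remove()
print(tok.decode(out.logits[0, -1].argmax().item()))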

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

This second set of text-copying attention heads also shows up in every LLM we tested, and these heads work in a totally different way from token induction heads.

Instead of copying tokens, they copy *concepts*.

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

So Sheridan scrutinized copying mechanisms in LLMs and found a SECOND route.

Yes, the token induction of Elhage and Olsson is there.

But there is *another* route where the copying is done in a different way. It shows up in attention heads that do 2-ahead copying.
bsky.app/profile/sfe...

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0

Sheridan's erasure is Bad News for induction heads.

Induction heads are how transformers copy text: they find earlier tokens in identical contexts. (Elhage 2021, Olsson 2022 arxiv.org/abs/2209.11895)

But when that context ("what token came before") is erased, how could induction possibly work?
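As a toy sketch of the rule those papers describe: to predict the next token, find an earlier occurrence of the current token and copy whatever followed it. (The token list is invented; real induction heads implement this softly via attention.)

def induction_step(tokens):
    # Find the most recent earlier occurrence of the final token
    # and predict the token that followed it.
    last = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == last:
            return tokens[i + 1]
    return None  # no earlier match: nothing to copy

seq = ["The", "cat", "sat", ".", "The", "cat"]
print(induction_step(seq))  # 'sat' - copies what followed the earlier "The cat"

Erase the record of which token came before, and the matching step has nothing to key on; that is exactly the puzzle above.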

27.09.2025 20:54 · 👍 2  🔁 0  💬 1  📌 0
