
Thomas Fel

@thomasfel.bsky.social

Explainability, Computer Vision, Neuro-AI. 🪴 Kempner Fellow @Harvard. Prev. PhD @Brown, @Google, @GoPro. Crêpe lover. 📍 Boston | 🔗 thomasfel.me

1,396 Followers | 364 Following | 36 Posts | Joined: 16.11.2024

Latest posts by thomasfel.bsky.social on Bluesky


Are you at #NeurIPS2025? Check out the #KempnerInstitute's Day 2 presentations! 💡

#AI #NeuroAI

@cpehlevan.bsky.social @kanakarajanphd.bsky.social @thomasfel.bsky.social @andykeller.bsky.social @binxuwang.bsky.social @njw.fish @yilundu.bsky.social

04.12.2025 14:02 – 👍 4  🔁 1  💬 0  📌 0
Into the Rabbit Hull – Part I - Kempner Institute: This blog post offers an interpretability deep dive, examining the most important concepts emerging in one of today's central vision foundation models, DINOv2. This blogpost is the first of a […]

๐Ÿ‡Into the Rabbit Hull โ€” Part 1: A Deep Dive into DINOv2๐Ÿง 
Our latest Deeper Learning blog post is an #interpretability deep dive into one of todayโ€™s leading vision foundation models: DINOv2.
๐Ÿ“–Read now: bit.ly/4nNfq8D
Stay tuned โ€” Part 2 coming soon.
#AI #VLMs #DINOv2

12.11.2025 15:49 – 👍 11  🔁 2  💬 1  📌 0

The Bau lab is on fire! 😍

06.11.2025 14:13 – 👍 3  🔁 0  💬 0  📌 0

Interested in doing a PhD at the intersection of human and machine cognition? ✨ I'm recruiting students for Fall 2026! ✨

Topics of interest include pragmatics, metacognition, reasoning, & interpretability (in humans and AI).

Check out JHU's mentoring program (due 11/15) for help with your SoP 👇

04.11.2025 14:44 – 👍 27  🔁 15  💬 0  📌 1

Pleased to share new work with @sflippl.bsky.social @eberleoliver.bsky.social @thomasmcgee.bsky.social & undergrad interns at Institute for Pure and Applied Mathematics, UCLA.

Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
www.arxiv.org/pdf/2510.15987

🧵 1/n

27.10.2025 18:13 – 👍 74  🔁 15  💬 1  📌 0

🧠 Thrilled to share our NeuroView with Ellie Pavlick!

"From Prediction to Understanding: Will AI Foundation Models Transform Brain Science?"

AI foundation models are coming to neuroscience – if scaling laws hold, predictive power will be unprecedented.

But is that enough?

Thread 🧵👇

24.10.2025 11:22 – 👍 22  🔁 8  💬 2  📌 0

Thx a lot Naomi! 🙌🥹

16.10.2025 21:50 – 👍 1  🔁 0  💬 0  📌 0

This is so cool. When you look at representational geometry, it seems intuitive that models are combining convex regions of "concepts", but I wouldn't have expected that this is PROVABLY true for attention or that there was such a rich theory for this kind of geometry.

16.10.2025 18:33 – 👍 33  🔁 5  💬 2  📌 1

That concludes this two-part descent into the Rabbit Hull.
Huge thanks to all collaborators who made this work possible – and especially to @binxuwang.bsky.social, with whom this project was built, experiment after experiment.
🎮 kempnerinstitute.github.io/dinovision/
📄 arxiv.org/pdf/2510.08638

15.10.2025 17:13 – 👍 5  🔁 0  💬 0  📌 0

If this holds, three implications:
(i) Concepts = points (or regions), not directions
(ii) Probing is bounded: toward archetypes, not vectors
(iii) Can't recover generating hulls from the sum: we should look deeper than a single layer's activations to recover the true latents

15.10.2025 17:13 – 👍 1  🔁 1  💬 1  📌 0

Synthesizing these observations, we propose a refined view, motivated by Gärdenfors' theory and attention geometry.
Activations = multiple convex hulls simultaneously: a rabbit among animals, brown among colors, fluffy among textures.

The Minkowski Representation Hypothesis.

15.10.2025 17:13 – 👍 2  🔁 0  💬 1  📌 0
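A minimal numeric sketch of what the Minkowski Representation Hypothesis asserts, under toy assumptions (the concept groups, archetype counts, and dimensions below are made up for illustration, not taken from the paper): a token activation built as the sum of one convex combination per concept hull.

```python
# Toy Minkowski-sum construction of a single token activation:
# one point sampled from the convex hull of each concept group, then summed.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                    # embedding dimension (arbitrary)
hulls = {name: rng.normal(size=(k, d))    # k archetypes per concept group
         for name, k in [("animal", 5), ("color", 3), ("texture", 4)]}

def sample_from_hull(archetypes, rng):
    """Random convex combination of a hull's archetypes."""
    w = rng.dirichlet(np.ones(len(archetypes)))
    return w @ archetypes

# "a rabbit among animals, brown among colors, fluffy among textures":
activation = sum(sample_from_hull(A, rng) for A in hulls.values())
print(activation.shape)                   # (64,) -- one token embedding
```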

Taken together, the signs of partial density, local connectedness, and coherent dictionary atoms indicate that DINO's representations are organized beyond linear sparsity alone.

15.10.2025 17:13 – 👍 1  🔁 0  💬 1  📌 0

Can position explain this?

We found that positional information collapses: from high-rank to a near 2-dim sheet. Early layers encode precise location; later ones retain abstract axes.

This compression frees dimensions for features, and *position doesn't explain PCA map smoothness*

15.10.2025 17:13 – 👍 0  🔁 0  💬 1  📌 0
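One plausible way to quantify that collapse, sketched on synthetic stand-ins rather than actual DINOv2 embeddings: count how many principal components are needed to capture most of the variance of position-related activations at each layer.

```python
# Effective-rank sketch: how many PCA components explain 95% of the variance?
# High for the synthetic "early layer", ~2 for a noisy 2-dim "late layer" sheet.
import numpy as np

def effective_rank(X, var_kept=0.95):
    X = X - X.mean(axis=0, keepdims=True)
    s = np.linalg.svd(X, compute_uv=False)
    ratios = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(ratios, var_kept) + 1)

rng = np.random.default_rng(0)
early = rng.normal(size=(256, 768))                     # high-rank stand-in
late = rng.normal(size=(256, 2)) @ rng.normal(size=(2, 768))
late += 0.01 * rng.normal(size=late.shape)              # near 2-dim sheet
print(effective_rank(early), effective_rank(late))      # many vs. ~2
```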

Patch embeddings form smooth, connected surfaces tracing objects and boundaries.

This may suggest an interpolative geometry: tokens as mixtures between landmarks, shaped by clustering and spreading forces in the training objectives.

15.10.2025 17:13 – 👍 1  🔁 0  💬 1  📌 0

We found antipodal feature pairs (dᵢ ≈ −dⱼ): vertical vs horizontal lines, white vs black shirts, left vs right…

Also, co-activation statistics only moderately shape geometry: concepts that fire together aren't necessarily nearby – nor orthogonal when they don't.

15.10.2025 17:13 – 👍 0  🔁 0  💬 1  📌 0
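A quick sketch of how one might scan a dictionary for such antipodal pairs; the dictionary below is a random stand-in with one planted pair, not the actual SAE atoms.

```python
# Find the most antipodal pair of atoms (cosine closest to -1).
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(100, 64))                 # stand-in dictionary
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms (rows)
D[1] = -D[0]                                   # plant one antipodal pair

cos = D @ D.T
i, j = np.unravel_index(np.argmin(cos), cos.shape)
print(i, j, cos[i, j])                         # -> 0 1 ≈ -1.0
```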

Under the Linear Rep. Hypothesis, we'd expect the dictionary to be quasi-orthogonal.
Instead, training drives atoms from near-Grassmannian initialization to higher coherence.
Several concepts fire almost always: the embedding is partly dense (!), contradicting pure sparse coding.

15.10.2025 17:13 – 👍 1  🔁 0  💬 1  📌 0
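For context on the coherence claim, here is a generic sketch (not the paper's measurement) of mutual coherence, the largest |cosine| between distinct atoms, compared with the Welch bound that Grassmannian-like frames approach.

```python
# Mutual coherence of a dictionary vs. the Welch lower bound.
import numpy as np

def coherence(D):
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    G = np.abs(D @ D.T)
    np.fill_diagonal(G, 0.0)
    return G.max()

n, d = 512, 64                                      # illustrative sizes
D = np.random.default_rng(2).normal(size=(n, d))    # stand-in atoms
welch = np.sqrt((n - d) / (d * (n - 1)))            # coherence lower bound
print(coherence(D), welch)  # the post reports training raising coherence
```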

๐Ÿ•ณ๏ธ๐Ÿ‡Into the Rabbit Hull โ€“ Part II

Continuing our interpretation of DINOv2, the second part of our study concerns the *geometry of concepts* and the synthesis of our findings toward a new representational *phenomenology*:

the Minkowski Representation Hypothesis

15.10.2025 17:13 – 👍 31  🔁 9  💬 2  📌 1

Huge thanks to all collaborators who made this work possible, and especially to @binxuwang.bsky.social. This work grew from a year of collaboration!
Tomorrow, Part II: geometry of concepts and Minkowski Representation Hypothesis.
🕹️ kempnerinstitute.github.io/dinovision
📄 arxiv.org/pdf/2510.08638

14.10.2025 21:00 – 👍 0  🔁 0  💬 0  📌 0

Curious tokens, the registers.
DINO seems to use them to encode global invariants: we find concepts (directions) that fire exclusively (!) on registers.

Examples of such concepts include a motion-blur detector and style detectors (game screenshots, drawings, paintings, warped images...).

14.10.2025 21:00 – 👍 0  🔁 0  💬 1  📌 0

Now for depth estimation. How does DINO know depth?

It turns out it has discovered several human-like monocular depth cues: texture gradients resembling blurring or bokeh, shadow detectors, and projective cues.

Most units mix cues, but a few remain remarkably pure.

14.10.2025 21:00 – 👍 0  🔁 0  💬 1  📌 0

Another surprise here: the most important concepts are not object-centric at all, but boundary detectors. Remarkably, these concepts coalesce into a low-dimensional subspace (see paper).

14.10.2025 21:00 – 👍 1  🔁 0  💬 1  📌 0

This kind of concept breaks a key assumption in interpretability: that a concept is about the tokens where it fires. Here it is the opposite – the concept is defined by where it does not fire. An open question is how models form such concepts.

14.10.2025 21:00 – 👍 0  🔁 0  💬 1  📌 0

Let's zoom in on classification.
For every class, we find two concepts: one fires on the object (e.g., "rabbit"), and another fires everywhere *except* the object -- but only when it's present!

We call them Elsewhere Concepts (credit: @davidbau.bsky.social).

14.10.2025 21:00 – 👍 1  🔁 0  💬 1  📌 0
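A hypothetical way to score such an Elsewhere Concept (names and the toy mask below are illustrative, not the paper's protocol): on images containing the class, the concept's firing mass should land almost entirely off-object.

```python
# Fraction of a concept's firing mass that lands outside the object mask.
import numpy as np

def elsewhere_score(act_map, obj_mask):
    act = np.clip(act_map, 0, None)        # keep non-negative activations
    total = act.sum()
    return float(act[~obj_mask].sum() / total) if total > 0 else 0.0

# Toy example: concept fires everywhere except a central "rabbit" patch.
mask = np.zeros((16, 16), dtype=bool)
mask[5:11, 5:11] = True
act = np.ones((16, 16))
act[mask] = 0.0
print(elsewhere_score(act, mask))          # 1.0 -> fires only off-object
```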

Assuming the Linear Rep. Hypothesis, SAEs arise naturally as instruments for concept extraction; they will be our companions in this descent.
Archetypal SAE uncovered 32k concepts.

Our first observation: different tasks recruit distinct regions of this conceptual space.

14.10.2025 21:00 – 👍 0  🔁 0  💬 1  📌 0
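For readers unfamiliar with the tool, a minimal sparse autoencoder sketch following the standard recipe (linear encoder with ReLU, linear decoder, L1 penalty on the codes); this is not the Archetypal SAE used in the post, and the 768-dim input is an assumed token size.

```python
# Minimal SAE: activations -> sparse concept code -> reconstruction.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_concepts: int):
        super().__init__()
        self.enc = nn.Linear(d_model, n_concepts)
        self.dec = nn.Linear(n_concepts, d_model, bias=False)

    def forward(self, x):
        z = torch.relu(self.enc(x))     # sparse, non-negative concept code
        return self.dec(z), z           # reconstruction and code

sae = SparseAutoencoder(d_model=768, n_concepts=32_000)   # ~32k concepts
x = torch.randn(8, 768)                                   # fake patch tokens
x_hat, z = sae(x)
loss = ((x - x_hat) ** 2).mean() + 1e-3 * z.abs().mean()  # recon + sparsity
```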

๐Ÿ•ณ๏ธ๐Ÿ‡ ๐™„๐™ฃ๐™ฉ๐™ค ๐™ฉ๐™๐™š ๐™๐™–๐™—๐™—๐™ž๐™ฉ ๐™ƒ๐™ช๐™ก๐™ก โ€“ ๐™‹๐™–๐™ง๐™ฉ ๐™„ (๐‘ƒ๐‘Ž๐‘Ÿ๐‘ก ๐ผ๐ผ ๐‘ก๐‘œ๐‘š๐‘œ๐‘Ÿ๐‘Ÿ๐‘œ๐‘ค)

An interpretability deep dive into DINOv2, one of vision's most important foundation models.

And today is Part I. Buckle up, we're exploring some of its most charming features. :)

14.10.2025 21:00 – 👍 36  🔁 12  💬 2  📌 0

Really neat, congrats!

12.10.2025 00:59 – 👍 1  🔁 0  💬 0  📌 0
Superposition disentanglement of neural representations reveals hidden alignment: The superposition hypothesis states that a single neuron within a population may participate in the representation of multiple features in order for the population to represent more features than the ...

Superposition has reshaped interpretability research. In our @unireps.bsky.social paper led by @andre-longon.bsky.social we show it also matters for measuring alignment! Two systems can represent the same features yet appear misaligned if those features are mixed differently across neurons.

08.10.2025 20:54 – 👍 9  🔁 2  💬 2  📌 0
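A toy construction of that failure mode (my own illustration, not the paper's method): two populations carry identical latent features mixed differently across neurons, so naive neuron-to-neuron correlation looks poor even though a linear map aligns them almost perfectly.

```python
# Same features, different neuron mixing: naive alignment fails,
# linear alignment succeeds.
import numpy as np

rng = np.random.default_rng(3)
F = rng.normal(size=(1000, 10))           # shared latent features
A, B = rng.normal(size=(2, 10, 50))       # two different mixing matrices
X, Y = F @ A, F @ B                       # two "neural populations"

# Naive: correlate neuron i with neuron i.
naive = np.mean([np.corrcoef(X[:, i], Y[:, i])[0, 1] for i in range(50)])

# Linear alignment: least-squares map from X to Y.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
r2 = 1 - ((Y - X @ W) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(round(naive, 2), round(r2, 2))      # near 0 vs. ~1.0
```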
Explanations are a means to an end: Modern methods for explainable machine learning are designed to describe how models map inputs to outputs--without deep consideration of how these explanations will be used in practice. This paper arg...

For XAI it's often thought that explanations help a (boundedly rational) user "unlock" info in features for some decision. But no one says this; they say vaguer things like "supporting trust". We lay out some implicit assumptions that become clearer when you take a formal view here: arxiv.org/abs/2506.22740

08.10.2025 23:12 – 👍 30  🔁 3  💬 2  📌 0

Beautiful work!

10.10.2025 00:00 – 👍 2  🔁 0  💬 1  📌 0

🚨 Updated: "How far can we go with ImageNet for Text-to-Image generation?"

TL;DR: train a text2image model from scratch on ImageNet only and beat SDXL.

Paper, code, data available! Reproducible science FTW!
🧵👇

📜 arxiv.org/abs/2502.21318
💻 github.com/lucasdegeorg...
💽 huggingface.co/arijitghosh/...

08.10.2025 20:40 – 👍 43  🔁 10  💬 1  📌 2
