
Harry Thasarathan

@hthasarathan.bsky.social

PhD student @YorkUniversity @LassondeSchool. I work on computer vision and interpretability.

46 Followers  |  61 Following  |  12 Posts  |  Joined: 07.02.2025

Latest posts by hthasarathan.bsky.social on Bluesky

Universal Sparse Autoencoders: Interpretable cross-model concept alignment using sparse autoencoders.

with Julian Forsyth, @thomasfel.bsky.social, @matthewkowal.bsky.social, @csprofkgd.bsky.social

Demo: yorkucvil.github.io/UniversalSAE/

15.07.2025 02:36 · 👍 4 | 🔁 0 | 💬 0 | 📌 0

๐ŸŒŒ๐Ÿ›ฐ๏ธ๐Ÿ”ญWant to explore universal visual features? Check out our interactive demo of concepts learned from our #ICML2025 paper "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment".

Come see our poster at 4pm on Tuesday in East Exhibition Hall A-B, E-1208!

15.07.2025 02:36 · 👍 12 | 🔁 6 | 💬 1 | 📌 3

Our work finding universal concepts in vision models is accepted at #ICML2025!!!

My first major conference paper with my wonderful collaborators and friends @matthewkowal.bsky.social @thomasfel.bsky.social
@Julian_Forsyth
@csprofkgd.bsky.social

Working with y'all is the best 🥹

Preprint ⬇️!!

01.05.2025 22:57 · 👍 15 | 🔁 4 | 💬 0 | 📌 1

Accepted at #ICML2025! Check out the preprint.

HUGE shoutout to Harry (1st PhD paper, in 1st year), Julian (1st ever, done as an undergrad), Thomas and Matt!

@hthasarathan.bsky.social @thomasfel.bsky.social @matthewkowal.bsky.social

01.05.2025 15:03 · 👍 35 | 🔁 7 | 💬 2 | 📌 0

Check out Neehar Kondapaneni's upcoming ICLR 2025 work, which proposes a new approach to understanding how two neural networks differ by discovering the shared and unique concepts each network learns.

Representational Similarity via Interpretable Visual Concepts
arxiv.org/abs/2503.15699

12.04.2025 07:58 · 👍 22 | 🔁 7 | 💬 0 | 📌 1

A very interesting work that explores the possibility of a unified interpretation across multiple models.

09.02.2025 09:13 · 👍 2 | 🔁 1 | 💬 0 | 📌 0

This was joint work with my wonderful collaborators @Julian_Forsyth @thomasfel.bsky.social @matthewkowal.bsky.social and my supervisor @csprofkgd.bsky.social. Couldn't ask for better mentors and friends 🫶!!!

(9/9)

07.02.2025 15:15 · 👍 4 | 🔁 0 | 💬 0 | 📌 0

We hope this work contributes to the growing discourse on universal representations. As the zoo of vision models grows, a canonical, interpretable concept space could be crucial for safety and understanding. Code coming soon!

(8/9)

07.02.2025 15:15 · 👍 2 | 🔁 0 | 💬 1 | 📌 0

Our method reveals model-specific features too: DinoV2 (left) shows specialized geometric concepts (depth, perspective), while SigLIP (right) captures unique text-aware visual concepts.

This opens new paths for understanding model differences!

(7/9)

07.02.2025 15:15 · 👍 6 | 🔁 2 | 💬 1 | 📌 0

Using coordinated activation maximization on universal concepts, we can visualize how each model independently represents the same concept, allowing us to further explore model similarities and differences. Below are concepts visualized for DinoV2, SigLIP, and ViT.

(6/9)

07.02.2025 15:15 · 👍 3 | 🔁 0 | 💬 1 | 📌 0
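
A rough sketch of what coordinated activation maximization could look like in code; everything here (function names, image size, pooled activations) is an illustrative assumption, not the paper's released implementation. For one shared concept index, a separate input image is optimized per model so that each model's activations, projected into the shared concept space, maximize that same concept. `backbones` maps model names to frozen feature extractors, and `usae` is a universal SAE with per-model `encoders`, like the one sketched under post (4/9) below.

```python
import torch

def visualize_concept(backbones, usae, concept_idx, steps=256, lr=0.05):
    # One trainable image per model; the backbones themselves are not updated.
    images = {m: torch.randn(1, 3, 224, 224, requires_grad=True) for m in backbones}
    opt = torch.optim.Adam(list(images.values()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.zeros(())
        for name, backbone in backbones.items():
            acts = backbone(images[name])    # assumed pooled activations, shape (1, d)
            z = usae.encoders[name](acts)    # project into the shared concept space
            loss = loss - z[0, concept_idx]  # ascend the *same* concept in every model
        loss.backward()
        opt.step()
    return images  # one optimized "concept image" per model, ready to compare
```

In practice, feature-visualization machinery (image parameterizations, augmentations, regularizers) would sit on top; the point is only that a single shared concept index drives every model's optimization.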

Using co-firing and firing entropy metrics, we uncover universal features ranging from basic primitives (colors, textures) to complex abstractions (object interactions, hierarchical compositions). We find that universal concepts are important for reconstructing model activations!

(5/9)

07.02.2025 15:15 · 👍 3 | 🔁 0 | 💬 1 | 📌 1
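
To make the universality scoring concrete, here is a hedged sketch of a firing-entropy-style metric; the paper's exact definitions of co-firing and firing entropy may differ. For each shared concept, we ask how its total firing mass is spread across models: near-uniform mass (high entropy over models) suggests a universal concept, while mass concentrated in one model suggests a model-specific one.

```python
import numpy as np

def firing_entropy(firing_mass):
    """firing_mass: (n_models, n_concepts) array of nonnegative firing totals."""
    p = firing_mass / (firing_mass.sum(axis=0, keepdims=True) + 1e-9)  # per-concept distribution over models
    h = -(p * np.log(p + 1e-9)).sum(axis=0)                            # entropy over models
    return h / np.log(firing_mass.shape[0])                            # normalize to [0, 1]

# Toy numbers: concept 0 fires evenly in all models (universal),
# concepts 1 and 2 fire almost exclusively in one model (model-specific).
mass = np.array([[5.0, 9.0, 0.1],    # model A, e.g. DinoV2
                 [6.0, 0.0, 0.2],    # model B, e.g. SigLIP
                 [5.5, 0.1, 8.0]])   # model C, e.g. ViT
print(firing_entropy(mass))          # high for concept 0, low for concepts 1 and 2
```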

Previous approaches found universal features by post-hoc mining or similarity analysis, but this scales poorly. Our solution: extend Sparse Autoencoders to learn a shared concept space directly, encoding one model's activations and reconstructing all others from this unified vocabulary.

(4/9)

07.02.2025 15:15 · 👍 2 | 🔁 0 | 💬 1 | 📌 0
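
A minimal sketch of that idea, under stated assumptions (per-model linear encoders/decoders around one shared concept dictionary, TopK sparsity; all names are illustrative, not the authors' code): activations from one model are encoded into a shared sparse code, and every model's activations are reconstructed from that single code.

```python
import torch
import torch.nn as nn

class UniversalSAE(nn.Module):
    def __init__(self, dims, n_concepts, k=32):
        """dims: {model_name: activation_dim}; n_concepts: shared dictionary size."""
        super().__init__()
        self.k = k  # active concepts per input (TopK sparsity is an assumption here)
        self.encoders = nn.ModuleDict({m: nn.Linear(d, n_concepts) for m, d in dims.items()})
        self.decoders = nn.ModuleDict({m: nn.Linear(n_concepts, d) for m, d in dims.items()})

    def forward(self, source, acts):
        z = self.encoders[source](acts)      # one model's activations -> shared concept space
        top = torch.topk(z, self.k, dim=-1)  # keep only the k strongest concepts
        z_sparse = torch.zeros_like(z).scatter_(-1, top.indices, top.values)
        return {m: dec(z_sparse) for m, dec in self.decoders.items()}  # reconstruct *all* models

dims = {"dinov2": 768, "siglip": 1152, "vit": 768}  # hypothetical activation widths
usae = UniversalSAE(dims, n_concepts=16384)
recons = usae("dinov2", torch.randn(4, 768))        # a reconstruction for every model
```

Training would then minimize, over the same batch of images, the error between each decoded output and the corresponding model's true activations, forcing the shared dictionary to explain all models at once.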

If vision models seem to learn the same fundamental visual concepts, what are these universal features, and how can we find them?

(3/9)

07.02.2025 15:15 · 👍 2 | 🔁 0 | 💬 1 | 📌 0

Vision models (backbones & foundation models alike) seem to learn transferable features that are relevant across many tasks. Recent work even suggests we are converging towards the same "Platonic" representation of the world. (Image from arxiv.org/abs/2405.07987)

(2/9)

07.02.2025 15:15 · 👍 5 | 🔁 0 | 💬 1 | 📌 0

๐ŸŒŒ๐Ÿ›ฐ๏ธ๐Ÿ”ญWanna know which features are universal vs unique in your models and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"!

arxiv.org/abs/2502.03714

(1/9)

07.02.2025 15:15 · 👍 56 | 🔁 17 | 💬 1 | 📌 5
