Tiago Pimentel

@tpimentel.bsky.social

Postdoc at ETH. Formerly a PhD student at the University of Cambridge :)

2,212 Followers  |  126 Following  |  32 Posts  |  Joined: 08.12.2023

Latest posts by tpimentel.bsky.social on Bluesky

Theory of XAI Workshop: Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law ...

Interested in provable guarantees and fundamental limitations of XAI? Join us at the "Theory of Explainable AI" workshop Dec 2 in Copenhagen! @ellis.eu @euripsconf.bsky.social

Speakers: @jessicahullman.bsky.social @doloresromerom.bsky.social @tpimentel.bsky.social

Call for Contributions: Oct 15

07.10.2025 12:53 · 👍 8  🔁 5  💬 0  📌 2
Paper title: Language models align with brain regions that represent concepts across modalities.
Authors:  Maria Ryskina, Greta Tuckute, Alexander Fung, Ashley Malkin, Evelina Fedorenko. 
Affiliations: Maria is affiliated with the Vector Institute for AI, but the work was done at MIT. All other authors are affiliated with MIT. 
Email address: maria.ryskina@vectorinstitute.ai.

Interested in language models, brains, and concepts? Check out our COLM 2025 🔦 Spotlight paper!

(And if you're at COLM, come hear about it on Tuesday – sessions Spotlight 2 & Poster 2!)

04.10.2025 02:15 · 👍 26  🔁 5  💬 1  📌 1

Accepted to EMNLP (and more to come 👀)! The camera-ready version is now online; very happy with how this turned out.

arxiv.org/abs/2507.01234

24.09.2025 15:21 · 👍 13  🔁 5  💬 0  📌 0
Convergence and Divergence of Language Models under Different Random Seeds: In this paper, we investigate the convergence of language models (LMs) trained under different random seeds, measuring convergence as the expected per-token Kullback–Leibler (KL) divergence across se...

This project was done with Finlay and @kmahowald.bsky.social, and it is the outcome of Finlay's Bachelor's thesis! Catch him presenting it at #EMNLP2025 :)

Paper: arxiv.org/abs/2509.26643
Code: github.com/Tr1ple-F/con...

01.10.2025 18:08 · 👍 4  🔁 0  💬 0  📌 0

See our paper for more: we also analyse other models, downstream tasks, and restricted subsets of tokens (e.g., only tokens with a certain part of speech)!

01.10.2025 18:08 · 👍 0  🔁 0  💬 1  📌 0

This means that (1) LMs can become less similar to each other even while they all get closer to the true distribution; and (2) larger models reconverge faster, while smaller ones may never reconverge.

01.10.2025 18:08 · 👍 0  🔁 0  💬 1  📌 0

* A sharp-divergence phase, where models diverge as they start using context.
* A slow-reconvergence phase, where predictions slowly become more similar again (especially in larger models).

01.10.2025 18:08 · 👍 0  🔁 0  💬 1  📌 0

Surprisingly, convergence isn't monotonic. Instead, we find four convergence phases across training.
* A uniform phase, where all seeds output nearly uniform distributions.
* A sharp-convergence phase, where models align, largely due to unigram frequency learning.

01.10.2025 18:08 · 👍 0  🔁 0  💬 1  📌 0

In this paper, we define convergence as the similarity between the outputs of LMs trained under different seeds, where similarity is measured as a per-token KL divergence. This lets us track whether models trained under identical settings, but with different seeds, behave the same way.

01.10.2025 18:08 · 👍 0  🔁 0  💬 1  📌 0
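
To make the metric concrete, here is a minimal sketch of the quantity involved: the KL divergence between the next-token distributions of two LMs that differ only in their random seed, averaged over token positions. The tiny, untrained GPT-2 configs, the example sentence, and the averaging details are illustrative stand-ins rather than the paper's actual models or evaluation setup.

```python
# Sketch: expected per-token KL divergence between two seed-varied LMs.
import torch
import torch.nn.functional as F
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

def seeded_model(seed: int) -> GPT2LMHeadModel:
    # Identical architecture and init scheme; only the random seed differs.
    torch.manual_seed(seed)
    cfg = GPT2Config(n_layer=2, n_head=2, n_embd=64)  # tiny and untrained, purely for illustration
    return GPT2LMHeadModel(cfg).eval()

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model_a, model_b = seeded_model(0), seeded_model(1)

ids = tok("Language models trained under different seeds need not agree.",
          return_tensors="pt").input_ids

with torch.no_grad():
    logp_a = F.log_softmax(model_a(ids).logits, dim=-1)  # (1, T, V)
    logp_b = F.log_softmax(model_b(ids).logits, dim=-1)

# KL(p_a || p_b) at every position, then averaged over positions; this sample
# average approximates the expected per-token KL divergence between the seeds.
kl_per_token = (logp_a.exp() * (logp_a - logp_b)).sum(dim=-1)  # (1, T)
print(f"mean per-token KL (nats): {kl_per_token.mean().item():.3f}")
```

Averaging this quantity over a corpus and tracking it across training checkpoints of seed-varied runs is, roughly, how one would trace the convergence and divergence phases described in the thread.
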
Figure showing the four phases of convergence in LM training

LLMs are trained to mimic a “true” distribution; their decreasing cross-entropy confirms they get closer to this target during training. But do similar models approach this target distribution in similar ways? 🤔 Not really! Our new paper studies this, finding four convergence phases in training 🧵

01.10.2025 18:08 · 👍 24  🔁 4  💬 1  📌 1

Very happy this paper got accepted to NeurIPS 2025 as a Spotlight! 😁

Main takeaway: in mechanistic interpretability, we need assumptions about how DNNs encode concepts in their representations (e.g., the linear representation hypothesis). Without them, we can claim any DNN implements any algorithm!

01.10.2025 15:00 · 👍 25  🔁 4  💬 0  📌 0

Honoured to receive two (!!) SAC highlights awards at #ACL2025 😁 (Conveniently placed on the same slide!)
With the amazing @philipwitti.bsky.social, @gregorbachmann.bsky.social, @wegotlieb.bsky.social, @cuiding.bsky.social, Giovanni Acampa, @alexwarstadt.bsky.social, and @tamaregev.bsky.social

31.07.2025 07:41 · 👍 22  🔁 3  💬 0  📌 0

We are presenting this paper at #ACL2025 😁 Find us at poster session 4 (Wednesday morning, 11:00–12:30) to learn more about tokenisation bias!

27.07.2025 11:59 · 👍 11  🔁 2  💬 0  📌 0

@philipwitti.bsky.social will be presenting our paper "Tokenisation is NP-Complete" at #ACL2025 😁 Come to the Language Modelling 2 session (Wednesday morning, 9:00–10:30) to learn more about how challenging tokenisation can be!

27.07.2025 09:41 · 👍 7  🔁 3  💬 0  📌 0

Headed to Vienna for #ACL2025 to present our tokenisation bias paper and co-organise the L2M2 workshop on memorisation in language models. Reach out to chat about tokenisation, memorisation, and all things pre-training (esp. data-related topics)!

27.07.2025 06:40 · 👍 19  🔁 2  💬 2  📌 0

Causal Abstraction, the theory behind DAS, tests whether a network realizes a given algorithm. We show (w/ @denissutter.bsky.social, T. Hofmann, @tpimentel.bsky.social) that the theory collapses without the linear representation hypothesis, a problem we call the non-linear representation dilemma.

17.07.2025 10:57 · 👍 5  🔁 2  💬 1  📌 0

Importantly, despite these results, we still believe causal abstraction is one of the best frameworks available for mechanistic interpretability. Going forward, we should try to better understand how it is impacted by assumptions about how DNNs encode information. Longer 🧵 soon by @denissutter.bsky.social

14.07.2025 12:15 · 👍 4  🔁 0  💬 0  📌 0

Overall, our results show that causal abstraction (and interventions) is not a silver bullet, as it relies on assumptions about how features are encoded in the DNNs. We then connect our results to the linear representation hypothesis and to older debates in the probing literature.

14.07.2025 12:15 · 👍 2  🔁 0  💬 1  📌 0

We show, both theoretically (under reasonable assumptions) and empirically (on real-world models), that if we allow variables to be encoded in arbitrarily complex subspaces of the DNN's representations, any algorithm can be mapped to any model.

14.07.2025 12:15 · 👍 1  🔁 0  💬 1  📌 0

Causal abstraction identifies this correspondence by finding subspaces in the DNN's hidden states that encode the algorithm's hidden variables. Given such a map, we say the DNN implements the algorithm if the two behave identically under interventions.

14.07.2025 12:15 · 👍 0  🔁 0  💬 1  📌 0
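
For intuition about what such an interventional test looks like, here is a toy sketch of an interchange intervention on a linear subspace: run a base and a source input through the first layer, swap the subspace component of the base hidden state for the source's, and finish the forward pass. The two-layer tanh network, the dimensions, and the random orthonormal basis R (standing in for the learned alignment map that methods like DAS optimise) are all hypothetical choices for illustration, not the paper's setup.

```python
# Toy interchange intervention on a k-dimensional linear subspace of a hidden layer.
import torch

torch.manual_seed(0)
d_in, d_hid, d_out, k = 4, 16, 2, 3

layer1 = torch.nn.Linear(d_in, d_hid)
layer2 = torch.nn.Linear(d_hid, d_out)

# R holds an orthonormal basis of the subspace assumed to encode one hidden
# variable of the algorithm (the Q factor of a QR decomposition of random noise).
R, _ = torch.linalg.qr(torch.randn(d_hid, k))  # (d_hid, k)
proj = R @ R.T                                 # projector onto the subspace

def forward_with_swap(x_base, x_source):
    """Swap the subspace component of the base hidden state for the source's,
    then continue the forward pass from the patched hidden state."""
    h_base = torch.tanh(layer1(x_base))
    h_source = torch.tanh(layer1(x_source))
    h_patched = h_base - h_base @ proj + h_source @ proj
    return layer2(h_patched)

x_base, x_source = torch.randn(1, d_in), torch.randn(1, d_in)
print(forward_with_swap(x_base, x_source))
```

Causal abstraction then checks whether such swaps change the network's output exactly as the corresponding swap of the hidden variable changes the algorithm's output. The dilemma discussed in the paper arises when the alignment map is allowed to be an arbitrary non-linear function rather than a fixed linear subspace like this one.
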
The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability? The concept of causal abstraction got recently popularised to demystify the opaque decision-making processes of machine learning models; in short, a neural network can be abstracted as a higher-level ...

In this new paper, w/ @denissutter.bsky.social, @jkminder.bsky.social, and T. Hofmann, we study *causal abstraction*, a formal specification of when a deep neural network (DNN) implements an algorithm. This is the framework behind, e.g., distributed alignment search.

Paper: arxiv.org/abs/2507.08802

14.07.2025 12:15 · 👍 3  🔁 1  💬 1  📌 0
Paper title "The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?" with the paper's graphical abstract showing how more powerful alignment maps between a DNN and an algorithm allow more complex features to be found and more "accurate" abstractions.

Paper title "The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?" with the paper's graphical abstract showing how more powerful alignment maps between a DNN and an algorithm allow more complex features to be found and more "accurate" abstractions.

Mechanistic interpretability often relies on *interventions* to study how DNNs work. Are these interventions enough to guarantee the features we find are not spurious? No! ⚠️ In our new paper, we show many mech interp methods implicitly rely on the linear representation hypothesis 🧵

14.07.2025 12:15 · 👍 65  🔁 12  💬 1  📌 1

All modern LLMs run on top of a tokeniser, an often overlooked “preprocessing detail”. But what if that tokeniser systematically affects model behaviour? We call this tokenisation bias.

Let's talk about it and why it matters 👇
@aclmeeting.bsky.social #ACL2025 #NLProc

05.06.2025 10:43 · 👍 62  🔁 8  💬 1  📌 2

The word "laundry" contains both steps of the laundry process:
1. Undry
2. Dry

04.06.2025 19:14 · 👍 26  🔁 2  💬 1  📌 0

Love this! Especially the explicit operationalization of what “bias” they are measuring, via specifying the relevant counterfactual.
Definitely an approach that more papers talking about effects could incorporate to better clarify the phenomenon they are studying.

04.06.2025 15:55 · 👍 2  🔁 1  💬 0  📌 0

If you use LLMs, tokenisation bias probably affects you:
* Text generation: tokenisation bias ⇒ length bias 🤯
* Psycholinguistics: tokenisation bias ⇒ systematically biased surprisal estimates 🫠
* Interpretability: tokenisation bias ⇒ biased logits 🤔

04.06.2025 14:55 · 👍 7  🔁 0  💬 0  📌 1
Causal Estimation of Tokenisation Bias: Modern language models are typically trained over subword sequences, but ultimately define probabilities over character-strings. Ideally, the choice of the tokeniser – which maps character-strings to...

Led by @pietrolesci.bsky.social and with Clara Meister, Thomas Hofmann, @andreasvlachos.bsky.social :)

Paper: arxiv.org/abs/2506.03149
Code: github.com/pietrolesci/...

04.06.2025 10:51 · 👍 2  🔁 0  💬 0  📌 0
Title of paper "Causal Estimation of Tokenisation Bias" and schematic of how we define tokenisation bias, which is the causal effect we are interested in.

Title of paper "Causal Estimation of Tokenisation Bias" and schematic of how we define tokenisation bias, which is the causal effect we are interested in.

A string may get 17 times less probability if tokenised as two symbols (e.g., ⟨he, llo⟩) than as one (e.g., ⟨hello⟩), by an LM trained from scratch in each situation! Our new ACL paper proposes an observational method to estimate this causal effect. Longer thread soon!

04.06.2025 10:51 · 👍 53  🔁 9  💬 1  📌 3
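
As a purely illustrative probe (not the paper's observational estimator, which concerns LMs trained under different tokenisations), one can already see that the score an LM assigns depends on the segmentation by evaluating the same character string under two token splits with a single pretrained model. The choice of gpt2, the example string, and the simplified BOS handling below are my own assumptions.

```python
# Naive illustration: the same string scored under two different token splits.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def string_logprob(token_ids):
    # Prepend BOS so every token in the segmentation is scored given a prefix.
    ids = torch.tensor([[tok.bos_token_id] + token_ids])
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(-1)
    # Score token i with the distribution predicted at position i-1, then sum.
    return logprobs[0, :-1].gather(-1, ids[0, 1:].unsqueeze(-1)).sum().item()

text = " hello world"
canonical = tok(text).input_ids                                  # tokeniser's own segmentation
alternative = tok(" he").input_ids + tok("llo world").input_ids  # same string, different split

print("canonical  :", tok.convert_ids_to_tokens(canonical), string_logprob(canonical))
print("alternative:", tok.convert_ids_to_tokens(alternative), string_logprob(alternative))
```

Since surprisal is just negative log-probability, gaps like this feed directly into the surprisal estimates used in psycholinguistics. The paper's causal question is stronger still: how would the probability change if the LM had been trained with the other tokenisation? That counterfactual is exactly why an observational estimator is needed.
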

I think it's a reasonable change, and it doesn't change the template style, so I'd say yes. There is already a `\citep*` command to cite all authors of a paper, so citing only the first two should also be fine. I created a pull request this morning to add it to the official template :)

29.05.2025 14:13 · 👍 2  🔁 0  💬 0  📌 0

I created a pull request earlier today. So hopefully they will approve and merge it soon-ish? :)

29.05.2025 14:04 · 👍 2  🔁 0  💬 0  📌 0
