Our paper "Prediction Hubs are Context-Informed Frequent Tokens in LLMs" has been accepted at ACL 2025!
Main points:
1. Hubness is not a problem when language models do next-token prediction.
2. Nuisance hubness can appear when other comparisons are made.
07.07.2025 10:48
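For readers unfamiliar with the term: "hubness" refers to a few points turning up among the nearest neighbours of disproportionately many queries. Here is a minimal sketch with synthetic vectors of how one might measure it (illustrative only, not the paper's code; all names and sizes are stand-ins):

```python
# Hubness as skew of the k-occurrence distribution: a point is a "hub" if it
# appears among the k nearest neighbours of unusually many queries.
import numpy as np

def k_occurrence(queries: np.ndarray, candidates: np.ndarray, k: int = 10) -> np.ndarray:
    """Count how often each candidate is among the k nearest candidates of a query."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = q @ c.T                             # cosine similarities (n_queries, n_candidates)
    topk = np.argsort(-sims, axis=1)[:, :k]    # indices of the k most similar candidates
    return np.bincount(topk.ravel(), minlength=len(candidates))

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 64))   # stand-in for context representations
vocab = rng.normal(size=(500, 64))     # stand-in for output embeddings
counts = k_occurrence(hidden, vocab, k=10)
# Strong positive skew of `counts` indicates hubness.
skew = float(((counts - counts.mean()) ** 3).mean() / counts.std() ** 3)
print("max k-occurrence:", counts.max(), "skew:", round(skew, 2))
```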
Interpretability Techniques for Speech Models - Tutorial @ Interspeech 2025
The @interspeech.bsky.social early registration deadline is coming up in a few days!
Want to learn how to analyze the inner workings of speech processing models? Check out the programme for our tutorial:
interpretingdl.github.io/speech-inter... & sign up through the conference registration form!
13.06.2025 05:18
Last day to sign up for the COLT Symposium!
Register: tinyurl.com/colt-register
📢 Location change 📢
June 2nd, 14:30 - 19:00
UPF Campus de la Ciutadella
Room 40.101
maps.app.goo.gl/1216LJRsWmTE...
26.05.2025 10:44
⭐ Registration open until May 27th! ⭐
Website: www.upf.edu/web/colt/sym...
June 2nd, UPF
Speaker lineup:
Arianna Bisazza (language acquisition with NNs)
Naomi Saphra (emergence in LLM training dynamics)
Jean-Rémi King (TBD)
Louise McNally (pitfalls of contextual/formal accounts of semantics)
20.05.2025 08:13
Unique Hard Attention: A Tale of Two Sides
Understanding the expressive power of transformers has recently attracted attention, as it offers insights into their abilities and limitations. Many studies analyze unique hard attention transformers...
🧵 Excited to share our paper "Unique Hard Attention: A Tale of Two Sides" with Selim, Jiaoda, and Ryan, where we show that the way transformers break ties in attention scores has profound implications for their expressivity! And it got accepted to ACL! :)
The paper: arxiv.org/abs/2503.14615
17.05.2025 14:28
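One way to picture tie-breaking in hard attention: when several positions share the maximal score, the model attends to the leftmost or to the rightmost of them. A toy sketch of the two conventions (an illustrative assumption of this note, not the paper's formalism or code):

```python
# Unique hard attention attends to exactly one position with maximal score;
# ties can be broken toward the leftmost or the rightmost maximiser.
import numpy as np

def unique_hard_attention(scores: np.ndarray, values: np.ndarray, side: str = "leftmost") -> np.ndarray:
    """Return the value at the unique attended position for each query row."""
    if side == "leftmost":
        idx = np.argmax(scores, axis=-1)                      # first maximiser
    else:
        rev = scores[:, ::-1]
        idx = scores.shape[-1] - 1 - np.argmax(rev, axis=-1)  # last maximiser
    return values[idx]

scores = np.array([[1.0, 3.0, 3.0, 2.0]])   # a tie between positions 1 and 2
values = np.arange(4)
print(unique_hard_attention(scores, values, "leftmost"))    # -> [1]
print(unique_hard_attention(scores, values, "rightmost"))   # -> [2]
```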
Announcing the COLT Symposium on June 2nd!
Emergent features of language in minds and machines
What properties of language are emerging from work in experimental and theoretical linguistics, neuroscience & LLM interpretability?
Info: tinyurl.com/colt-site
Register: tinyurl.com/colt-register
🧵 1/3
13.05.2025 09:00
I could not be more excited for this to be out!
With a fully automated pipeline based on Universal Dependencies, 43 non-Indo-European languages, and the best LLMs scoring only 90.2%, I hope this will be a challenging and interesting benchmark for multilingual NLP.
Go test your language models!
07.04.2025 15:03
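For context, Universal Dependencies treebanks are distributed in the CoNLL-U format: ten tab-separated columns per token, blank lines between sentences. A minimal reader sketch, just to illustrate the data such a pipeline builds on (the file path is a placeholder; this is not the benchmark's actual code):

```python
# Minimal CoNLL-U reader: yields one sentence at a time as a list of token dicts.
CONLLU_FIELDS = ["id", "form", "lemma", "upos", "xpos",
                 "feats", "head", "deprel", "deps", "misc"]

def read_conllu(path):
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                   # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
            elif line.startswith("#"):     # sentence-level comments (sent_id, text, ...)
                continue
            else:
                cols = line.split("\t")
                if "-" in cols[0] or "." in cols[0]:
                    continue               # skip multiword-token and empty-node lines
                sentence.append(dict(zip(CONLLU_FIELDS, cols)))
    if sentence:
        yield sentence

for sent in read_conllu("example.conllu"):   # placeholder file name
    # e.g., collect (word, POS, head, relation) tuples per sentence
    print([(t["form"], t["upos"], t["head"], t["deprel"]) for t in sent])
```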
Scatterplot titled “Empirical Evidence of Ideological Targeting in Federal Layoffs: Agencies seen as liberal are significantly more likely to face DOGE layoffs.”
• The x-axis represents Perceived Ideological Leaning of federal agencies, ranging from -2 (Most Liberal) to +2 (Most Conservative), based on survey responses from over 1,500 federal executives.
• The y-axis shows Agency Size (Number of Staff) on a logarithmic scale from 1,000 to 1,000,000.
Each point represents a federal agency:
• Red dots indicate agencies that experienced DOGE layoffs.
• Gray dots indicate agencies with no layoffs.
Key Observations:
• Liberal-leaning agencies (left side of the plot) are disproportionately represented among red dots, indicating higher layoff rates.
• Notable targeted agencies include:
  • HHS (Health & Human Services)
  • EPA (Environmental Protection Agency)
  • NIH (National Institutes of Health)
  • CFPB (Consumer Financial Protection Bureau)
  • Dept. of Education
  • USAID (U.S. Agency for International Development)
• The National Nuclear Security Administration (DOE), despite its conservative leaning (+1 on the scale), is an exception among targeted agencies.
• A notable outlier: the Department of Veterans Affairs (moderately conservative) also faced layoffs despite its size.
Takeaway:
The figure visually demonstrates that DOGE layoffs disproportionately targeted liberal-leaning agencies, supporting claims of ideological bias. The pattern reveals that layoffs were not driven by agency size or budget alone but were strongly associated with perceived ideology.
Source: Richardson, Clinton, & Lewis (2018). Elite Perceptions of Agency Ideology and Workforce Skill. The Journal of Politics, 80(1).
The DOGE firings have nothing to do with “efficiency” or “cutting waste.” They're a direct push to weaken federal agencies perceived as liberal. This was evident from the start, and now the data confirms it: targeted agencies are overwhelmingly those seen as more left-leaning. 🧵⬇️
20.02.2025 02:18
list of banned keywords
🚨 BREAKING. From a program officer at the National Science Foundation, a list of keywords that can cause a grant to be pulled. I will be sharing screenshots of these keywords along with a decision tree. Please share widely. This is a crisis for academic freedom & science.
04.02.2025 01:26
3️⃣ LLMs that are better at next-token prediction have higher, earlier ID peaks.
5/6
02.02.2025 18:46
2️⃣ The ID peak (beige) is where different LLMs are most similar (big shapes).
All LLMs share this high-dimensional phase of linguistic abstraction, but...
4/6
02.02.2025 18:46
... the ID peak marks where syntactic, semantic, and abstract linguistic features like toxicity and sentiment are first decodable.
⭐ Use these layers for downstream transfer!
(e.g., for brain encoding models, see arxiv.org/abs/2409.05771)
3/6
02.02.2025 18:46
1️⃣ The ID peak is linguistically relevant.
- it collapses on shuffled text (destroying syntactic/semantic structure)
- it grows over the course of training...
2/6
02.02.2025 18:46
Here's our work accepted to #ICLR2025!
We look at how intrinsic dimension evolves over LLM layers, spotting a universal high-dimensional phase.
This ID peak is where:
- linguistic features are built
- different LLMs are most similar,
with implications for task transfer
🧵 1/6
02.02.2025 18:46
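A rough sketch of the kind of computation behind the thread above: estimate the intrinsic dimension (ID) of each layer's representations with the TwoNN estimator (Facco et al., 2017) and look for the layer where it peaks. This uses synthetic stand-ins for per-layer token representations and is an illustration, not the paper's pipeline:

```python
# TwoNN intrinsic-dimension estimate from the ratio of 2nd to 1st neighbour distances.
import numpy as np

def two_nn_id(x: np.ndarray) -> float:
    """Estimate the intrinsic dimension of points x, shape (n_points, n_features)."""
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    r = np.sort(dists, axis=1)[:, :2]        # r1, r2 for every point
    mu = r[:, 1] / r[:, 0]
    mu = mu[np.isfinite(mu) & (mu > 1.0)]    # guard against duplicate points
    return len(mu) / np.log(mu).sum()        # maximum-likelihood estimate

rng = np.random.default_rng(0)
# Stand-ins for token representations from each layer, shape (n_tokens, hidden_size).
layers = [rng.normal(size=(300, 64)) for _ in range(12)]
ids = [two_nn_id(layer) for layer in layers]
peak_layer = int(np.argmax(ids))
print("ID per layer:", [round(d, 1) for d in ids], "| peak at layer", peak_layer)
```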
White House pauses all federal grants, sparking confusion
The Trump administration has put a hold on all federal financial grants and loans, affecting tens of billions of dollars in payments.
I think some people hear “grants” and think that without them, scientists and government workers just have less stuff to play with at work. But grants fund salaries for students, academics, researchers, and people who work in all areas of public service.
“Pausing” grants means people don't eat.
28.01.2025 03:03
New EMNLP paper from Eleonora Gualdoni & @gboleda.bsky.social!
Why do objects have many names?
Human lexicons contain different words that speakers can use to refer to the same object, e.g., purple or magenta for the same color.
We investigate using tools from efficient coding... 🧵
1/3
02.12.2024 10:38
⚡ Postdoc opportunity w/ COLT
Beatriu de Pinós contract, 3 yrs, competitive call by Catalan government.
Apply with a PI (Marco, Gemma, or Thomas).
Reqs: min 2y postdoc experience outside Spain, not having lived in Spain for >12 months in the last 3y.
Application ~December-February (exact dates TBD)
25.11.2024 09:51
Hello! We're a computational linguistics group in Barcelona headed by Gemma Boleda, Marco Baroni & Thomas Brochhagen
We do psycholinguistics, cogsci, language evolution & NLP, with diverse backgrounds in philosophy, formal linguistics, CS & physics
Get in touch for postdoc, PhD & MS openings!
25.11.2024 10:17
My lab has been working on comparing neural representations for the past few years - methods like RSA, CKA, CCA, Procrustes distance
We are often asked: What do these things tell us about the system's function? How do they relate to decoding?
Our new paper has some answers arxiv.org/abs/2411.08197
18.11.2024 18:17
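As a concrete example of one of the comparison measures mentioned above, here is a minimal linear CKA implementation (Kornblith et al., 2019) run on synthetic matrices; this is illustrative only, not code from the linked paper:

```python
# Linear CKA between two representation matrices of shape (n_samples, n_features).
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    x = x - x.mean(axis=0)                  # centre each feature
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return float(hsic / (norm_x * norm_y))

rng = np.random.default_rng(0)
a = rng.normal(size=(200, 50))              # representations from system A
q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
b = 2.0 * (a @ q)                           # rotated + rescaled copy of A: CKA ~ 1
c = rng.normal(size=(200, 50))              # unrelated representations: much lower CKA
print("CKA(a, b) =", round(linear_cka(a, b), 3))
print("CKA(a, c) =", round(linear_cka(a, c), 3))
```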
PhD candidate @Technion | NLP
doing a phd in RL/online learning on questions related to exploration and adaptivity
> https://antoine-moulin.github.io/
PhD at EPFL 🧠💻
Ex @MetaAI, @SonyAI, @Microsoft
Egyptian 🇪🇬
Sentence processing modeling | Computational psycholinguistics | 1st year PhD student at LLF, CNRS, Université Paris Cité | Currently visiting COLT, Universitat Pompeu Fabra, Barcelona, Spain
https://ninanusb.github.io/
The largest workshop on analysing and interpreting neural networks for NLP.
BlackboxNLP will be held at EMNLP 2025 in Suzhou, China
blackboxnlp.github.io
PhD student at AIML Lab TU Darmstadt
Interested in concept learning, neuro-symbolic AI and program synthesis
Postdoc @rug.nl with Arianna Bisazza.
Interested in NLP, interpretability, syntax, language acquisition and typology.
Language, its evolution, diversity and biological foundations.
ICREA & University of Barcelona.
Multimodal Communication and Learning in Social Interactions (CoCoDev team). Associate Professor of Computer/Cognitive Science at Aix-Marseille University.
afourtassi.github.io
Cognitive scientist, philosopher, and psychologist at Berkeley, author of The Scientist in the Crib, The Philosophical Baby and The Gardener and the Carpenter and grandmother of six.
Natural and artificial general intelligence.
https://marcelbinz.github.io/
Professor of Applied Physics at Stanford | Venture Partner a16z | Research in AI, Neuroscience, Physics
Interested in cognition and artificial intelligence. Research Scientist at Google DeepMind. Previously cognitive science at Stanford. Posts are mine.
lampinen.github.io
Professor of Linguistics and Psychology, New York University
linguist, experimental work on meaning (lexical semantics), language use, representation, learning, constructionist usage-based approach, Princeton U https://adele.scholar.princeton.edu/publications/topic
Studying language in biological brains and artificial ones @MIT.
www.tuckute.com
Doing maths research (geom. analysis, metric geometry, large point configs., opt. transport, etc) since ~2010, deep learning and applied research since ~2020.
https://sites.google.com/site/mircpetrache/home
Postdoc at MIT BCS, interested in language(s) in humans and LMs
Postdoc at Utrecht University, previously PhD candidate at the University of Amsterdam
Multimodal NLP, Vision and Language, Cognitively Inspired NLP
https://ecekt.github.io/