excited that the Society for Computation in Linguistics (SCiL) will be colocated with #acl2026nlp this year, and I'm grateful to the National Science Foundation for helping support SCiL presenters' registration costs!
(keynotes: Jenn Hu and Noah Smith
deadline: Jan 30
conference: July 3 & 4)
09.12.2025 00:01 · 5 likes · 0 reposts · 0 replies · 0 quotes
Really big announcement! See @wtimkey.bsky.social's thread for the details on an exciting new preprint from the NYU-UMass Syntactic Ambiguity Processing group. It is the culmination of the team's research efforts over these last couple of years, and we're really happy with it.
14.11.2025 19:53 · 13 likes · 3 reposts · 1 reply · 0 quotes
New Preprint: osf.io/eq2ra
Reading feels effortless, but it's actually quite complex under the hood. Most words are easy to process, but some make us reread or linger. It turns out that LLMs can tell us why, but only in certain cases... (1/n)
14.11.2025 19:18 · 12 likes · 5 reposts · 2 replies · 1 quote
New NeurIPS paper! Why do LMs represent concepts linearly? We focus on LMs' tendency to linearly separate true and false assertions, and provide an analysis of the truth circuit in a toy model. Joint work with Gilad Yehudai, @tallinzen.bsky.social, Joan Bruna, and @albertobietti.bsky.social.
24.10.2025 15:19 · 25 likes · 5 reposts · 1 reply · 1 quote
Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!
LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data
We extend this effort to 45 new languages!
15.10.2025 10:53 · 44 likes · 16 reposts · 1 reply · 4 quotes
To model human linguistic prediction, make LLMs less superhuman
When people listen to or read a sentence, they actively make predictions about upcoming words: words that are less predictable are generally read more slowly than predictable ones. The success of larg...
Another banger from @tallinzen.bsky.social.
Also fits with some of the criticisms of Centaur and my faculty-based approach generally; if you want LLMs to model human cognition, give them more architecture akin to human faculty psychology like long and short-term memory.
arxiv.org/abs/2510.05141
15.10.2025 13:57 · 23 likes · 6 reposts · 1 reply · 1 quote
thanks Cameron!
15.10.2025 13:59 · 0 likes · 0 reposts · 0 replies · 0 quotes
LLMs Switch to Guesswork Once Instructions Get Long
LLMs abandon reasoning for guesswork when instructions get long, new work from Linguistics PhD student Jackson Petty & CDS shows.
Linguistics PhD student @jacksonpetty.org finds LLMs "quiet-quit" when instructions get long, switching from reasoning to guesswork.
With CDS' @tallinzen.bsky.social, @shauli.bsky.social, @lambdaviking.bsky.social, @michahu.bsky.social, and Wentao Wang.
nyudatascience.medium.com/llms-switch-...
10.09.2025 15:26 · 7 likes · 2 reposts · 0 replies · 0 quotes
Nature: US senators poised to reject Trump's proposed massive science cuts
Committee gives first hint that policymakers might preserve, rather than slash, funding for US National Science Foundation and other agencies.
DO NOT GIVE UP!
Our advocacy is working.
A key Senate committee has indicated that it will reject Trump's proposed cuts to science agencies including NASA and the NSF.
Keep speaking up and calling your electeds 🗣️🗣️🗣️
11.07.2025 19:03 · 1340 likes · 442 reposts · 8 replies · 23 quotes
Maybe five years with a no-cost extension!
11.07.2025 21:30 · 3 likes · 0 reposts · 0 replies · 0 quotes
Brian Dillon Receives NSF Grant to Explore AI and Human Language Processing : College of Humanities & Fine Arts : UMass Amherst
Linguist Brian Dillon receives NSF grant to investigate how AI and humans differ in interpreting meaning during language comprehension.
Congratulations to @linguistbrian.bsky.social for receiving this grant to study how to constrain language models to read complex sentences more like humans, and congratulations to me for getting to collaborate with him for another four years! www.umass.edu/humanities-a...
11.07.2025 21:30 · 18 likes · 1 repost · 1 reply · 1 quote
Thanks Andrea!
02.07.2025 18:12 · 0 likes · 0 reposts · 0 replies · 0 quotes
If we have a lot of shared followers, perhaps you could comment on the pinned tweet on my account and provide context?! Thank you!
02.07.2025 16:27 · 1 like · 0 reposts · 1 reply · 0 quotes
My Twitter account has been hacked :( Please don't click on any links "I" posted on that account recently!
02.07.2025 16:20 · 2 likes · 1 repost · 1 reply · 0 quotes
I'll be accepting applications for a while, and will also consider people with a late start date. Feel free to email if you have questions. No need for a formal cover letter.
21.06.2025 15:13 · 0 likes · 0 reposts · 0 replies · 0 quotes
The goal is to model some cool behavioral and neural data from humans (some to be collected) but we expect to do a lot of fundamental modeling and interpretability work. You don't need to have existing experience in cognitive science but you should be interested in learning more about it.
21.06.2025 15:13 · 2 likes · 0 reposts · 1 reply · 0 quotes
NYU LLM + cognitive science post-doc interest form
Tal Linzen's group at NYU is hiring a post-doc! We're interested in creating language models that process language more like humans than mainstream LLMs do, through architectural modifications and int...
I'm hiring at least one post-doc! We're interested in creating language models that process language more like humans than mainstream LLMs do, through architectural modifications and interpretability-style steering. Express interest here: docs.google.com/forms/d/e/1F...
21.06.2025 15:13 · 42 likes · 21 reposts · 2 replies · 1 quote
[Post in Hebrew; text unrecoverable due to encoding corruption]
20.06.2025 10:04 · 3 likes · 0 reposts · 0 replies · 0 quotes
How well can LLMs understand tasks with complex sets of instructions? We investigate through the lens of RELIC: REcognizing (formal) Languages In-Context, finding a significant overhang between what LLMs are able to do theoretically and how well they put this into practice.
09.06.2025 18:02 · 5 likes · 2 reposts · 1 reply · 0 quotes
Following the success story of BabyBERTa, I and many other NLPers have turned to language acquisition for inspiration. In this new paper we show that using Child-Directed Language as training data is unfortunately *not* beneficial for syntax learning, at least not in the traditional LM training regime.
30.05.2025 20:45 · 24 likes · 6 reposts · 1 reply · 0 quotes
Depends on what you mean by US academics, I guess. A lot of people are here for a temporary position, don't have strong ties to the country, and were mentally prepared to move elsewhere anyway. Those people are much more likely to leave than before.
24.05.2025 23:33 · 7 likes · 0 reposts · 1 reply · 0 quotes
I'll have a bit of time to chat with folks in Berlin and/or Copenhagen about AI, LLMs, cognitive science, how good your bike infrastructure is, etc. Let me know!
23.05.2025 13:44 · 1 like · 0 reposts · 0 replies · 0 quotes
And this one on language models with cognitively plausible memory in Potsdam on Tuesday (as part of this in-person-only sentence processing workshop vasishth.github.io/sentproc-wor...):
23.05.2025 13:44 · 5 likes · 0 reposts · 1 reply · 0 quotes
Cross-posting the abstracts for two talks I'm giving next week! This one on formal languages for LLM pretraining and evaluation, at Apple ML Research in Copenhagen on Wednesday:
23.05.2025 13:44 · 10 likes · 2 reposts · 1 reply · 0 quotes
Updated version of our position piece on how language models can help us understand how people learn and process language, on why it's crucial to train models on cognitively plausible datasets, and on the BabyLM project that addresses this issue.
12.05.2025 16:14 · 11 likes · 1 repost · 0 replies · 0 quotes
out of date, should be $300 billion now!
01.05.2025 17:38 · 9 likes · 0 reposts · 0 replies · 0 quotes
thanks! I'll start with the frens and nice people and work my way up from there!
27.03.2025 19:41 · 2 likes · 0 reposts · 0 replies · 0 quotes
At #HSP2025, I'll present work with @tallinzen.bsky.social and @shravanvasishth.bsky.social on modeling garden-pathing in a huge benchmark dataset: hsp2025.github.io/abstracts/29.... Statistically decomposing the effect into subprocesses greatly improves predictive fit over just comparing means!
14.03.2025 10:17 · 11 likes · 2 reposts · 0 replies · 1 quote
Going to give this website another shot! What are good lists of linguistics, psycholinguistics, NLP and AI accounts?
27.03.2025 19:14 · 15 likes · 0 reposts · 4 replies · 0 quotes
Thanks Ted for mentioning me in the same tweet as Chris! This website really is better than the other one!
19.11.2023 05:52 · 4 likes · 0 reposts · 0 replies · 0 quotes