
Tal Linzen

@tallinzen.bsky.social

NYU professor, Google research scientist. Good at LaTeX.

2,936 Followers  |  79 Following  |  20 Posts  |  Joined: 10.11.2023

Latest posts by tallinzen.bsky.social on Bluesky

Really big announcement! See @wtimkey.bsky.social's thread for the details on an exciting new preprint from the NYU-UMass Syntactic Ambiguity Processing group. It is the culmination of the team's research efforts over these last couple of years, and we're really happy with it.

14.11.2025 19:53 — 👍 13    🔁 3    💬 1    📌 0

New Preprint: osf.io/eq2ra

Reading feels effortless, but it's actually quite complex under the hood. Most words are easy to process, but some words make us reread or linger. It turns out that LLMs can tell us about why, but only in certain cases... (1/n)

14.11.2025 19:18 — 👍 9    🔁 5    💬 2    📌 1

New NeurIPS paper! Why do LMs represent concepts linearly? We focus on LMs' tendency to linearly separate true and false assertions, and provide an analysis of the truth circuit in a toy model. Joint work with Gilad Yehudai, @tallinzen.bsky.social, Joan Bruna and @albertobietti.bsky.social.

24.10.2025 15:19 — 👍 25    🔁 5    💬 1    📌 0

🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!

15.10.2025 10:53 — 👍 43    🔁 16    💬 1    📌 3
To model human linguistic prediction, make LLMs less superhuman When people listen to or read a sentence, they actively make predictions about upcoming words: words that are less predictable are generally read more slowly than predictable ones. The success of larg...

Another banger from @tallinzen.bsky.social.

Also fits with some of the criticisms of Centaur, and with my faculty-based approach generally: if you want LLMs to model human cognition, give them more architecture akin to human faculty psychology, like long- and short-term memory.

arxiv.org/abs/2510.05141

15.10.2025 13:57 — 👍 23    🔁 6    💬 1    📌 1

thanks Cameron!

15.10.2025 13:59 — 👍 0    🔁 0    💬 0    📌 0
LLMs Switch to Guesswork Once Instructions Get Long LLMs abandon reasoning for guesswork when instructions get long, new work from Linguistics PhD student Jackson Petty & CDS shows.

Linguistics PhD student @jacksonpetty.org finds LLMs "quiet-quit" when instructions get long, switching from reasoning to guesswork.

With CDS' @tallinzen.bsky.social, @shauli.bsky.social, @lambdaviking.bsky.social, @michahu.bsky.social, and Wentao Wang.

nyudatascience.medium.com/llms-switch-...

10.09.2025 15:26 — 👍 7    🔁 2    💬 0    📌 0
Nature: US senators poised to reject Trump’s proposed massive science cuts

Committee gives first hint that policymakers might preserve, rather than slash, funding for US National Science Foundation and other agencies.


DO NOT GIVE UP!

Our advocacy is working.

A key Senate committee has indicated that it will reject Trump’s proposed cuts to science agencies including NASA and the NSF.

Keep speaking up and calling your electeds 🗣️🗣️🗣️

11.07.2025 19:03 — 👍 1347    🔁 445    💬 8    📌 23

Maybe five years with a no-cost extension!

11.07.2025 21:30 — 👍 3    🔁 0    💬 0    📌 0
Brian Dillon Receives NSF Grant to Explore AI and Human Language Processing : College of Humanities & Fine Arts : UMass Amherst Linguist Brian Dillon receives NSF grant to investigate how AI and humans differ in interpreting meaning during language comprehension.

Congratulations to @linguistbrian.bsky.social for receiving this grant to study how to constrain language models to read complex sentences more like humans, and congratulations to me for getting to collaborate with him for another four years! www.umass.edu/humanities-a...

11.07.2025 21:30 — 👍 18    🔁 1    💬 1    📌 1

Thanks Andrea!

02.07.2025 18:12 — 👍 0    🔁 0    💬 0    📌 0

If we have a lot of shared followers, perhaps you could comment on the pinned tweet on my account and provide context?! Thank you!

02.07.2025 16:27 — 👍 1    🔁 0    💬 1    📌 0

My Twitter account has been hacked :( Please don't click on any links "I" posted on that account recently!

02.07.2025 16:20 — 👍 2    🔁 1    💬 1    📌 0

I'll be accepting applications for a while, and will also consider people with a late start date. Feel free to email if you have questions. No need for a formal cover letter.

21.06.2025 15:13 — 👍 0    🔁 0    💬 0    📌 0

The goal is to model some cool behavioral and neural data from humans (some to be collected) but we expect to do a lot of fundamental modeling and interpretability work. You don't need to have existing experience in cognitive science but you should be interested in learning more about it.

21.06.2025 15:13 — 👍 2    🔁 0    💬 1    📌 0
NYU LLM + cognitive science post-doc interest form Tal Linzen's group at NYU is hiring a post-doc! We're interested in creating language models that process language more like humans than mainstream LLMs do, through architectural modifications and int...

I'm hiring at least one post-doc! We're interested in creating language models that process language more like humans than mainstream LLMs do, through architectural modifications and interpretability-style steering. Express interest here: docs.google.com/forms/d/e/1F...

21.06.2025 15:13 — 👍 42    🔁 22    💬 2    📌 1

Actually, at passport control on the Israeli side of the crossing they didn't ask for proof that I also hold a foreign passport; they let me leave the country without a problem. It turns out that the security restriction on Israeli citizens leaving the country applies only to travel by air, not by land.

20.06.2025 10:04 — 👍 3    🔁 0    💬 0    📌 0

How well can LLMs understand tasks with complex sets of instructions? We investigate through the lens of RELIC: REcognizing (formal) Languages In-Context, finding a significant overhang between what LLMs are able to do theoretically and how well they put this into practice.

09.06.2025 18:02 — 👍 5    🔁 2    💬 1    📌 0

Following the success story of BabyBERTa, I and many other NLPers have turned to language acquisition for inspiration. In this new paper we show that using Child-Directed Language as training data is unfortunately *not* beneficial for syntax learning, at least not in the traditional LM training regime.

30.05.2025 20:45 — 👍 24    🔁 6    💬 1    📌 0

Depends on what you mean by US academics, I guess. A lot of people are here for a temporary position, don't have strong ties to the country, and were mentally prepared to move elsewhere anyway. Those people are much more likely to leave than before.

24.05.2025 23:33 — 👍 7    🔁 0    💬 1    📌 0

I'll have a bit of time to chat with folks in Berlin and/or Copenhagen about AI, LLMs, cognitive science, how good your bike infrastructure is, etc. Let me know!

23.05.2025 13:44 — 👍 1    🔁 0    💬 0    📌 0

And this one on language models with cognitively plausible memory in Potsdam on Tuesday (as part of this in-person-only sentence processing workshop vasishth.github.io/sentproc-wor...):

23.05.2025 13:44 — 👍 5    🔁 0    💬 1    📌 0

Cross-posting the abstracts for two talks I'm giving next week! This one, on formal languages for LLM pretraining and evaluation, at Apple ML Research in Copenhagen on Wednesday:

23.05.2025 13:44 — 👍 10    🔁 2    💬 1    📌 0

Updated version of our position piece on how language models can help us understand how people learn and process language, on why it's crucial to train models on cognitively plausible datasets, and on the BabyLM project that addresses this issue.

12.05.2025 16:14 — 👍 11    🔁 1    💬 0    📌 0

out of date, should be $300 billion now!

01.05.2025 17:38 — 👍 9    🔁 0    💬 0    📌 0

thanks! I'll start with the frens and nice people and work my way up from there!

27.03.2025 19:41 — 👍 2    🔁 0    💬 0    📌 0

At #HSP2025, I'll present work with @tallinzen.bsky.social and @shravanvasishth.bsky.social on modeling garden-pathing in a huge benchmark dataset: hsp2025.github.io/abstracts/29.... Statistically decomposing the effect into subprocesses greatly improves predictive fit over just comparing means!

14.03.2025 10:17 — 👍 11    🔁 2    💬 0    📌 1

Going to give this website another shot! What are good lists of linguistics, psycholinguistics, NLP and AI accounts?

27.03.2025 19:14 — 👍 15    🔁 0    💬 4    📌 0

Thanks Ted for mentioning me in the same tweet as Chris! This website really is better than the other one!

19.11.2023 05:52 — 👍 4    🔁 0    💬 0    📌 0

Very little happening on here but silence is certainly better than all of the boardroom drama takes on the other website. Four different people I follow just came up with the same unfunny joke about the most recent development in the drama, apparently independently?

19.11.2023 05:46 — 👍 10    🔁 0    💬 0    📌 0
