I'm excited to share that this paper was accepted at ICLR 2026! We show that language models encode one of the most basic ingredients of a world model: the ability to distinguish plausible from implausible states. Check out the paper for more details!
See you in Rio!
Paper: arxiv.org/abs/2507.12553
I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇
Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!
Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision, and one in language. Please repost!
Yeah exactly -- @kanishka.bsky.social in examples like yours above, if we assume that g=1 and those strings aren't likely to be ungrammatical realizations of some other messages, then diffs in p(string) will reflect diffs in p(m). Which is what we want, no?
This work was done with an amazing team: @wegotlieb.bsky.social, @siyuansong.bsky.social, @kmahowald.bsky.social, @rplevy.bsky.social
Preprint (pre-TACL version): arxiv.org/abs/2510.16227
10/10
Our work also raises new Qs. If LMs virtually always produce grammatical strings, then why is there so much overlap between the probs assigned to grammatical/ungrammatical strings?
This connects to tensions btwn language generation/identification (e.g., openreview.net/forum?id=FGT...)
9/10
An offshoot of our analysis: if you use minimal pairs that are not tightly controlled, you risk underestimating the grammatical competence of models, due to differences in underlying message probabilities. 8/10
As mentioned above, Prediction #3 shows that the overlap in probabilities across gram/ungram strings (a recent criticism) should NOT be interpreted as a failure of probability to tell us about grammaticality.
This overlap is to be expected if prob is influenced by factors other than gram. 7/10
We use our framework to derive 3 predictions, which we validate empirically:
1. Correlation btwn string probs within minimal pairs
2. Correlation btwn LMs’ and humans’ deltas within minimal pairs
3. Poor separation btwn prob of unpaired grammatical and ungrammatical strings
6/10
In other words, when messages aren’t controlled for, gram strings won't always be more probable than ungram strings.
This phenomenon has previously been used to argue that probability is a bad tool for measuring grammatical knowledge -- but in fact, it follows directly from our framework! 5/10
Minimal pairs are pairs of strings with the same underlying m but different values of g.
Good LMs have low P(g=0), so they prefer the grammatical string in the minimal pair.
But for non-minimal string pairs with different underlying messages, differences in P(m) can overwhelm even good LMs. 4/10
Returning to first principles:
In our framework, the probability of a string comes from two latent variables: m, the message to be conveyed; and g, whether the message is realized grammatically.
Ungrammatical strings get probability mass when g=0: the message is not realized grammatically. 3/10
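Here's a toy numerical sketch of this setup (all names and numbers are hypothetical, not the paper's actual model): a string's probability marginalizes over a latent message m and a grammaticality flag g, so a high-probability message can lift even its ungrammatical realization above a grammatical string for a rarer message.

```python
# Toy mixture model: p(string) = sum over m of p(m) * [p(g=1) p(s|m,g=1) + p(g=0) p(s|m,g=0)]
# Hypothetical messages, strings, and probabilities, chosen only to illustrate the point.

# Prior over messages: some messages are much more likely than others.
p_m = {"greet": 0.95, "apologize": 0.05}

# Probability that a message is realized ungrammatically (g=0).
p_g0 = 0.1

# Conditional string distributions given (message, g). Deterministic here for simplicity.
p_s_given = {
    ("greet", 1): {"hello there": 1.0},
    ("greet", 0): {"hello theres": 1.0},   # ungrammatical realization of "greet"
    ("apologize", 1): {"I am sorry": 1.0},
    ("apologize", 0): {"I is sorry": 1.0},  # ungrammatical realization of "apologize"
}

def p_string(s):
    """Marginal probability of a string under the toy mixture."""
    total = 0.0
    for m, pm in p_m.items():
        total += pm * (1 - p_g0) * p_s_given[(m, 1)].get(s, 0.0)
        total += pm * p_g0 * p_s_given[(m, 0)].get(s, 0.0)
    return total

# Within a minimal pair (same m), the grammatical string wins:
assert p_string("hello there") > p_string("hello theres")

# But across messages, a likely message's UNgrammatical string can beat
# a rare message's grammatical string:
assert p_string("hello theres") > p_string("I am sorry")
```

So overlap between grammatical and ungrammatical string probabilities falls out of the framework whenever p(m) isn't held fixed, which is exactly what minimal pairs fix.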
Here we develop and give evidence for a formal framework that reconciles these two observations.
Our framework provides theoretical justification for the widespread practice of using *minimal pairs* to test what grammatical generalizations LMs have acquired. 2/10
New work to appear @ TACL!
Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.
Yet they often assign higher probability to ungrammatical strings than to grammatical strings.
How can both things be true? 🧵👇
It’s grad school application season, and I wanted to give some public advice.
Caveats:
-*-*-*-*
> These are my opinions, based on my experiences; they are not secret tricks or guarantees
> They are general guidelines, not meant to cover a host of idiosyncrasies and special cases
Interested in doing a PhD at the intersection of human and machine cognition? ✨ I'm recruiting students for Fall 2026! ✨
Topics of interest include pragmatics, metacognition, reasoning, & interpretability (in humans and AI).
Check out JHU's mentoring program (due 11/15) for help with your SoP 👇
New preprint!
"Non-commitment in mental imagery is distinct from perceptual inattention, and supports hierarchical scene construction"
(by Li, Hammond, & me)
link: doi.org/10.31234/osf...
-- the title's a bit of a mouthful, but the nice thing is that it's a pretty decent summary
At #COLM2025 and would love to chat all things cogsci, LMs, & interpretability 🍁🥯 I'm also recruiting!
👉 I'm presenting at two workshops (PragLM, Visions) on Fri
👉 Also check out "Language Models Fail to Introspect About Their Knowledge of Language" (presented by @siyuansong.bsky.social Tue 11-1)
Can AI models introspect? What does introspection even mean for AI?
We revisit a recent proposal by Comșa & Shanahan, and provide new experiments + an alternate definition of introspection.
Check out this new work w/ @siyuansong.bsky.social, @harveylederman.bsky.social, & @kmahowald.bsky.social 👇
Due to popular demand, we are extending the CogInterp submission deadline again! 🗓️🥳
Submit by *8/27* (midnight AoE)
🗓️ The submission deadline for CogInterp @ NeurIPS has officially been *extended* to 8/22 (AoE)! 👇
Looking forward to seeing your submissions!
Heading to CogSci this week! ✈️
Find me giving talks on:
💬 Prod-comp asymmetry in children and LMs (Thu 7/31)
💬 How people make sense of nonsense (Sat 8/2)
📣 Also, I’m recruiting grad students + postdocs for my new lab at Hopkins! 📣
If you’re interested in language / cognition / AI, let’s chat! 😄
Join us at NeurIPS in San Diego this December for talks by experts in the field, including James McClelland, @cgpotts.bsky.social, @scychan.bsky.social, @ari-holtzman.bsky.social, @mtoneva.bsky.social, & @sydneylevine.bsky.social!
🗓️ Submit your 4-page paper (non-archival) by August 15!
4/4
We're bringing together researchers in fields such as machine learning, psychology, linguistics, and neuroscience to discuss new empirical findings + theories which help us interpret high-level cognitive abilities in deep learning models.
3/4
Deep learning models (e.g. LLMs) show impressive abilities. But what generalizations have these models acquired? What algorithms underlie model behaviors? And how do these abilities develop?
Cognitive science offers a rich body of theories and frameworks which can help answer these questions.
2/4
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣
How can we interpret the algorithms and representations underlying complex behavior in deep learning models?
🌐 coginterp.github.io/neurips2025/
1/4
Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🎉
How do LLMs engage in pragmatic reasoning, and what core pragmatic capacities remain beyond their reach?
🌐 sites.google.com/berkeley.edu/praglm/
📅 Submit by June 23rd
Preprint link: arxiv.org/abs/2504.14107
A huge thank you to my amazing collaborators Michael Lepori (@michael-lepori.bsky.social) & Michael Franke (@meanwhileina.bsky.social)!
(12/12)
Our work also suggests a new way of using AI models to study cognition: not just as a black box mapping stimuli to outputs, but potentially also as processing models.
Excited about future work using mechanistic interpretability to make new, testable predictions about human cognition!
(11/12)