New paper exploring the flexibility of control mechanisms in bilinguals is out in @cognitionjournal.bsky.social! w/ @kalinkatimmer.bsky.social, Jakub Szewczyk, and Zofia Wodniecka.
We ask if language context can affect language control in bilinguals.
authors.elsevier.com/a/1meRF2Hx32...
👇🧵
The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network!
authors.elsevier.com/a/1mUU83BtfH...
1/n 🧵👇
Now out in @pnas.org: a tour de force co-led by Sammy Floyd and Olessia Jouravlev (also with @moshepoliak.bsky.social, Zach Mineroff and Ted Gibson): www.pnas.org/doi/10.1073/...
New book! I have written a book called Syntax: A cognitive approach, published by MIT Press.
This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...
Human speech is continuous, and many meaning spaces (like color) are continuous too. Yet we use discrete words like “blue” and “green” that carve these spaces into categories.
In our new paper, we ask: How do people turn continuous spaces into structured, word-like systems for communication? (1/8)
Invited Speakers:
Rosemary Varley, U College London
Anna Papafragou, U Pennsylvania
Gary Lupyan, U Wisconsin
Cristine Legare, U Texas Austin
Julian Jara-Ettinger, Yale U
Ray Jackendoff, Tufts U
Anna Ivanova, Georgia Tech
Jacob Andreas, MIT
Language and thought in minds and machines (supported by the NSF)
The relationship between language and thought has been debated and researched across diverse disciplines, including linguistics, philosophy, cognitive science/psychology, neuroscience, and AI.
39th Annual Conference on Human Sentence Processing
March 26-28, 2026
hsp2026.org
MIT, Cambridge, MA, USA
email: info@hsp2026.org
Special session: Language and thought in minds and machines
Submission deadline: December 12, 2025
(Real deadline; no extension)
Curtis Chen initiated this as a class project, but he had sadly passed away before the work was completed. He was a brilliant and kind young scientist with a wonderful, dry sense of humor, and he had a bright future ahead of him. His passing is a great tragedy. He will be missed.
THANK YOU SO MUCH to my incredible advisor and the senior author of this work, Ted Gibson. Your insight and patience strongly shaped this paper and continue shaping my work and development.
(11) Bonus from the appendix 💰💰💰
Prompted by reviewers, we investigated the effect of label choice in binary acceptability ratings (acceptable, good, grammatical, natural), and found that participants’ behavior is invariant to the specific labels that the study uses.
(10) Moreover, word and construction probabilities are valid constructs across languages and are fairly easy quantities to compute, facilitating cross-linguistic research. Thus, construction probability is a powerful and potentially highly general tool for explaining acceptability.
(9) The current work merges insights on language processing from construction-based and probability-based approaches: participants are sensitive to the probabilities of words, sequences of words, and even argument structure, which is a more abstract quantity.
(8) We then replaced verb frames with adjective frames (e.g., “What was Mary glad that John bought?”), finding that both P(adjective) and P(that | adjective) predicted acceptability ratings in a similar way. Thus, a broader range of constructions is governed by word and construction probabilities.
(7) We conducted a replication of LR, adding fillers, removing catch trials, and making slight changes in the critical materials, and arrived at the same results. We learn that these effects are robust and replicable, despite the changes in materials and study platform (MTurk -> Prolific).
(6) P(verb) and P(that | verb) were largely independent; this justified using them and their interaction in an alternative model to that of LR, which only used the bigram probability of ‘{verb} that’. Both parameters and their interaction were significant and positive!
(5) We first extracted the probabilities of the verbs, P(verb), from the Corpus of Contemporary American English, and syntactically parsed the sentences therein to extract the argument structure of each verb, finding the probability with which each verb takes a sentence complement, P(that | verb).
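The counts-to-probabilities step above can be sketched in a few lines. This is a minimal illustration with invented toy counts, not the authors' COCA parsing pipeline; the verbs and numbers are made up.

```python
# Minimal sketch (hypothetical counts, not the actual corpus pipeline):
# estimating P(verb) and P(that | verb) from parsed observations.
from collections import Counter

# Each observation: (verb, whether it takes a sentence complement here).
observations = [
    ("whine", True), ("whine", False), ("whine", True),
    ("say", True), ("say", True), ("say", False), ("say", True),
    ("buy", False), ("buy", False), ("buy", False),
]

total = len(observations)
verb_counts = Counter(v for v, _ in observations)
that_counts = Counter(v for v, takes_that in observations if takes_that)

def p_verb(verb):
    """P(verb): relative frequency of the verb in the sample."""
    return verb_counts[verb] / total

def p_that_given_verb(verb):
    """P(that | verb): share of the verb's tokens with a sentence complement."""
    return that_counts[verb] / verb_counts[verb]

print(p_verb("whine"))             # P(whine) = 3/10
print(p_that_given_verb("whine"))  # P(that | whine) = 2/3
```

Note that `Counter` returns 0 for unseen keys, so verbs that never take a sentence complement (like "buy" here) correctly get P(that | verb) = 0.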
(4) In the current work, we evaluate the contribution of all 3 quantities using LR's original data, a modified replication experiment, and an extension to a new construction. We show how the alternative, simpler (but also probability-based) approach accounts for the data better.
(3) They proposed that acceptability depends on the probability of the verb-frame of the intermediate verb (e.g., “whine that”). But why start with the bigram “whine that” and not the probabilities that compose it: the lexeme probability P(whine) and argument structure probability P(that | whine)?
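The decomposition in question is just the chain rule: P(whine that) = P(whine) × P(that | whine). A toy sketch, with invented numbers purely for illustration:

```python
# Chain-rule decomposition of the verb-frame bigram probability.
# All numbers are hypothetical, chosen only to illustrate the identity.
p_whine = 0.002            # hypothetical lexeme probability P(whine)
p_that_given_whine = 0.4   # hypothetical argument-structure probability P(that | whine)

# Bigram probability of the frame "whine that":
p_whine_that = p_whine * p_that_given_whine
print(p_whine_that)  # ≈ 0.0008
```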
(2) The factors that affect the acceptability of long-distance extractions have long been debated, with multiple accounts proposed. Liu, Ryskin et al. (2022; LR) proposed a succinct probability-based account of a subtype of such sentences (e.g., “What did Mary whine that John bought?”).
(1) 💡NEW PUBLICATION💡
Word and construction probabilities explain the acceptability of certain long-distance dependency structures
Work with Curtis Chen and Ted Gibson
Link to paper: tedlab.mit.edu/tedlab_websi...
In memory of Curtis Chen.
New paper! 🧠 **The cerebellar components of the human language network**
with: @hsmall.bsky.social @moshepoliak.bsky.social @gretatuckute.bsky.social @benlipkin.bsky.social @awolna.bsky.social @aniladmello.bsky.social and @evfedorenko.bsky.social
www.biorxiv.org/content/10.1...
1/n 🧵