Happy to share that our paper "Mixture of Cognitive Reasoners: Modular Reasoning with Brain-Like Specialization" (aka MiCRo) has been accepted to #ICLR2026!! 🎉
See you in Rio 🇧🇷 🏖️
@andreadevarda.bsky.social
Postdoc at MIT BCS, interested in language(s) in humans and LMs https://andrea-de-varda.github.io/
Bridge AI and linguistics with the Computational and Theoretical Modelling of Language and Cognition (CLC) track at @cimecunitrento.bsky.social!
Apply to our MSc in Cognitive Science
First-call deadline for non-EU applicants: March 4, 2026.
ℹ️ corsi.unitn.it/en/cognitive-science
#cimec_unitrento #AI
The last chapter of my PhD (expanded) is finally out as a preprint!
"Semantic reasoning takes place largely outside the language network" 🧠
www.biorxiv.org/content/10.6...
What is semantic reasoning? Read on! 🧵👇
In collaboration with @tomlamarra.bsky.social Andrea Amelio Ravelli @chiarasaponaro.bsky.social @beatricegiustolisi.bsky.social @mariannabolog.bsky.social
Some words sound like what they mean. In IconicITA we show that the (psycho)linguistic factors that modulate which words are most iconic are similar between English and Italian. Lots more details in the paper!
Great work led by Daria & Greta showing that diverse agreement types draw on shared units (even across languages)!
What does it mean to understand language? We argue that the brain's core language system is limited, and that *deeply* understanding language requires EXPORTING info to other brain regions.
w/ @neuranna.bsky.social @evfedorenko.bsky.social @nancykanwisher.bsky.social
arxiv.org/abs/2511.19757
1/n 🧵👇
I'd love to watch this, is there a recording?
Computational psycho/neurolinguistics is lots of fun, but most studies only focus on English. If you think cross-linguistic evidence matters for understanding the language system, consider submitting an abstract to MMMM 2026!
Why does this alignment emerge? There are similarities in how reasoning models and humans learn: first by observing worked examples (pretraining), then by practicing with feedback (RL). In the end, just like humans, they allocate more effort to harder problems. (6/6)
Token count also captures differences across tasks. Avg. token count predicts avg. RT across domains (r = 0.97, left), and even item-level RTs across all tasks (r = 0.92 (!!), right). (5/6)
We found that the number of reasoning tokens generated by the model reliably correlates with human RTs within each task (mean r = 0.57, all ps < .001). (4/6)
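A minimal sketch of this kind of item-level correlation analysis, with made-up stand-in numbers rather than the paper's data:

```python
# Illustrative only: correlate per-item reasoning-token counts with human RTs.
# The arrays below are invented stand-ins, not the paper's data.
from scipy.stats import pearsonr

token_counts = [120, 45, 300, 88, 210, 150]   # reasoning tokens per item
human_rts = [4.1, 1.9, 9.8, 3.0, 7.2, 5.5]    # mean human RT per item (s)

r, p = pearsonr(token_counts, human_rts)
print(f"r = {r:.2f}, p = {p:.3f}")
```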
Large reasoning models can solve many reasoning problems, but do their computations reflect how humans think?
We compared human RTs to DeepSeek-R1's CoT length across seven tasks: arithmetic (numeric & verbal), logic (syllogisms & ALE), relational reasoning, intuitive reasoning, and ARC (3/6)
Neural networks are powerful in-silico models for studying cognition: LLMs and CNNs already capture key behaviors in language and vision. But can they also capture the cognitive demands of human reasoning? (2/6)
Our paper "The cost of thinking is similar between large reasoning models and humans" is now out in PNAS! 🤖🧠
w/ @fepdelia.bsky.social, @hopekean.bsky.social, @lampinen.bsky.social, and @evfedorenko.bsky.social
Link: www.pnas.org/doi/10.1073/... (1/6)
[Image: top, a syntax tree for the sentence "the doctor by the lawyer saw the artist"; bottom, a continuous vector.]
🤖🧠 I'll be considering applications for PhD students & postdocs to start at Yale in Fall 2026!
If you are interested in the intersection of linguistics, cognitive science, & AI, I encourage you to apply!
PhD link: rtmccoy.com/prospective_...
Postdoc link: rtmccoy.com/prospective_...
New preprint! w/ @drhanjones.bsky.social
Adding human-like memory limitations to transformers improves language learning, but impairs reading time prediction
This supports ideas from cognitive science but complicates the link between architecture and behavioural prediction
arxiv.org/abs/2508.05803
Can't wait for #CCN2025! Drop by to say hi to me / collaborators!
Is the Language of Thought == Language? A Thread 🧵
New Preprint (link: tinyurl.com/LangLOT) with @alexanderfung.bsky.social, Paris Jaggers, Jason Chen, Josh Rule, Yael Benn, @joshtenenbaum.bsky.social, @spiantado.bsky.social, Rosemary Varley, @evfedorenko.bsky.social
1/8
[Image: The BLiMP-NL dataset consists of 84 Dutch minimal pair paradigms covering 22 syntactic phenomena, and comes with graded human acceptability ratings & self-paced reading times. An example minimal pair: A. Ik bekijk de foto van mezelf in de kamer (I watch the photograph of myself in the room; grammatical). B. Wij bekijken de foto van mezelf in de kamer (We watch the photograph of myself in the room; ungrammatical). Differences in human acceptability ratings between sentences correlate with differences in model syntactic log-odds ratio scores.]
Next week I'll be in Vienna for my first *ACL conference! 🇦🇹✨
I will present our new BLiMP-NL dataset for evaluating language models on Dutch syntactic minimal pairs and human acceptability judgments ⬇️
🗓️ Tuesday, July 29th, 16:00-17:30, Hall X4 / X5 (Austria Center Vienna)
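A minimal sketch of the minimal-pair scoring idea mentioned in the image description (summed log-probability under a causal LM); "gpt2" is a stand-in model, and this is not BLiMP-NL's official evaluation code:

```python
# Sketch: score a minimal pair by summed log-probability under a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over predicted tokens
    return -loss.item() * (ids.shape[1] - 1)  # convert to a summed log-prob

grammatical = "Ik bekijk de foto van mezelf in de kamer"
ungrammatical = "Wij bekijken de foto van mezelf in de kamer"
# A positive log-odds ratio means the model prefers the grammatical member.
print(sentence_logprob(grammatical) - sentence_logprob(ungrammatical))
```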
I'm sharing a Colab notebook on using large language models for cognitive science! GitHub repo: github.com/MarcoCiappar...
It's geared toward psychologists & linguists and covers extracting embeddings, predictability measures, and comparing models across languages & modalities (vision). See examples 🧵
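In the same spirit as the notebook's topics, a minimal sketch of contextual-embedding extraction (illustrative, not code from the repo; the model name is a stand-in):

```python
# Sketch: pull one contextual embedding per token from a bidirectional LM.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

inputs = tok("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, n_tokens, hidden_dim)

for token, vec in zip(tok.convert_ids_to_tokens(inputs.input_ids[0]), hidden[0]):
    print(token, vec[:3].tolist())  # first few dimensions of each embedding
```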
📢 New paper out! We show that auditory iconicity is not marginal in English: word sounds often resemble real-world sounds. Using neural networks and sound similarity measures, we crack the myth of arbitrariness.
Read more: link.springer.com/article/10.3...
Many LM applications may be formulated as text generation conditional on some (Boolean) constraint.
Generate a…
- Python program that passes a test suite.
- PDDL plan that satisfies a goal.
- CoT trajectory that yields a positive reward.
The list goes on…
How can we efficiently satisfy these? 🧵👇
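For context, the naive baseline this line of work improves on is rejection sampling; a sketch, with `generate` and `constraint` as hypothetical placeholders:

```python
# Sketch: the naive baseline for constraint-conditional generation is
# rejection sampling: draw completions until one satisfies the constraint.
import random
from typing import Callable, Optional

def rejection_sample(generate: Callable[[], str],
                     constraint: Callable[[str], bool],
                     max_tries: int = 1000) -> Optional[str]:
    for _ in range(max_tries):
        candidate = generate()
        if constraint(candidate):
            return candidate
    return None  # rare constraints make naive sampling prohibitively slow

# Toy usage: "generate" a random integer string; require it to be even.
print(rejection_sample(lambda: str(random.randint(0, 99)),
                       lambda s: int(s) % 2 == 0))
```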
New paper! 🧠 **The cerebellar components of the human language network**
with: @hsmall.bsky.social @moshepoliak.bsky.social @gretatuckute.bsky.social @benlipkin.bsky.social @awolna.bsky.social @aniladmello.bsky.social and @evfedorenko.bsky.social
www.biorxiv.org/content/10.1...
1/n 🧵
PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN
Some of these words are consistently remembered better than others. Why is that?
In our paper, just published in J. Exp. Psychol., we provide a simple Bayesian account and show that it explains >80% of variance in word memorability: tinyurl.com/yf3md5aj
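A toy gloss of how a Bayesian account of memorability can work (my illustration with made-up numbers, not the paper's model): treat recognition as posterior inference over candidate words, so a word is memorable when it explains its own memory trace better than competitors do.

```python
# Toy illustration (invented numbers): Bayes' rule over candidate words.
# A word is memorable to the extent that P(word | trace) is high.
words = {
    # word: (prior availability, P(trace | word)) -- both made up
    "pineapple": (0.01, 0.90),
    "light":     (0.30, 0.20),
    "avalanche": (0.02, 0.80),
}

evidence = sum(prior * lik for prior, lik in words.values())
for word, (prior, lik) in words.items():
    print(f"{word}: P(word | trace) = {prior * lik / evidence:.2f}")
```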
Excited to share new work on the language system!
Using a large fMRI dataset (n=772) we comprehensively search for language-selective regions across the brain. w/
Aaron Wright, @benlipkin.bsky.social, and @evfedorenko.bsky.social
Link to the preprint: biorxiv.org/content/10.1...
Thread below! 👇🧵
New brain/language study w/ @evfedorenko.bsky.social! We applied task-agnostic individualized functional connectomics (iFC) to the entire history of fMRI scanning in the Fedorenko lab, parcellating nearly 1200 brains into networks based on activity fluctuations alone. doi.org/10.1101/2025... 🧵
1/n Happy to share a new paper with Calogero Zarbo & Marco Marelli! How well do LLMs represent the implicit meaning of familiar and novel compounds? How do they compare with simpler distributional semantics models (DSMs; i.e., word embeddings)?
doi.org/10.1111/cogs...
So excited to have our work on conlangs out in PNAS: www.pnas.org/doi/10.1073/... Congrats, Saima, Maya, and the rest of the crew -- well done!
Here is the MIT news story:
news.mit.edu/2025/esperan...
New preprint w/ @jennhu.bsky.social @kmahowald.bsky.social : Can LLMs introspect about their knowledge of language?
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵 (1/8)