(1) 💡NEW PUBLICATION💡
Word and construction probabilities explain the acceptability of certain long-distance dependency structures
Work with Curtis Chen and Ted Gibson
Link to paper: tedlab.mit.edu/tedlab_websi...
In memory of Curtis Chen.
05.08.2025 13:25 — 4 · 1 · 1 · 0
1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).
31.07.2025 17:55 — 18 · 7 · 1 · 0
Looking forward to seeing everyone at #CogSci2025 this week! Come check out what we've been working on in the LInC Lab, along with our fantastic collaborators!
Paper 📄 in 🧵👇
30.07.2025 18:28 — 5 · 2 · 1 · 0
Thrilled to see this work published β and even more thrilled to have been part of such a great collaborative team!
One key takeaway for me: Webcam eye-tracking w/ jsPsych is awesome for 4-quadrant visual world paradigm studies -- less so for displays w/ smaller ROIs.
08.07.2025 21:41 — 4 · 0 · 0 · 0
New paper w/ @ryskin.bsky.social and Chen Yu: We analyzed parent-child toy play and found that cross-situational learning statistics were present in naturalistic settings!
onlinelibrary.wiley.com/doi/epdf/10....
19.06.2025 18:24 — 4 · 1 · 1 · 0
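The core idea behind cross-situational statistics can be sketched with toy data (my own illustration; the scenes and word lists are invented, not from the corpus or the paper's analysis):

```python
from collections import defaultdict

# Toy scenes: (objects the child sees, words the parent says).
# Any single scene is ambiguous about which word maps to which object.
scenes = [
    (["ball", "cup"], ["ball", "cup"]),
    (["ball", "doll"], ["ball", "doll"]),
    (["cup", "doll"], ["cup", "doll"]),
]

# Accumulate word-object co-occurrence counts across scenes.
counts = defaultdict(lambda: defaultdict(int))
for objects, words in scenes:
    for word in words:
        for obj in objects:
            counts[word][obj] += 1

# For each word, the most frequently co-occurring object wins.
mapping = {word: max(objs, key=objs.get) for word, objs in counts.items()}
print(mapping)  # {'ball': 'ball', 'cup': 'cup', 'doll': 'doll'}
```

Each scene alone is consistent with several mappings; only the aggregate counts single out the correct one, which is the statistical signal the paper looks for in naturalistic toy play.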
What are the organizing dimensions of language processing?
We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals
23.05.2025 16:59 — 71 · 30 · 3 · 0
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model, visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").
🤖🧠 Paper out in Nature Communications! 🧠🤖
Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?
Our answer: Use meta-learning to distill Bayesian priors into a neural network!
www.nature.com/articles/s41...
1/n
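The distillation recipe can be caricatured in a few lines of plain Python (a toy sketch under my own assumptions: a lookup table stands in for the neural network, and the prior is a uniform prior over coin biases; this is not the paper's model):

```python
import random

random.seed(0)

def sample_task():
    # "Bayesian prior": coin biases drawn uniformly from [0, 1].
    return random.random()

def sample_data(bias, n=5):
    # Observations for one task: n coin flips with the task's bias.
    return [1 if random.random() < bias else 0 for _ in range(n)]

# Meta-training: across many prior-sampled tasks, record the average
# true bias given k observed heads. The table plays the role of the
# meta-learned network, which absorbs the prior from the task stream.
counts = {}
for _ in range(100_000):
    bias = sample_task()
    k = sum(sample_data(bias))
    s, n = counts.get(k, (0.0, 0))
    counts[k] = (s + bias, n + 1)

meta_learned = {k: s / n for k, (s, n) in counts.items()}

# The exact Bayesian posterior mean under a uniform prior is
# (k + 1) / (n + 2) = (k + 1) / 7; the meta-learned predictor
# should approximate it without ever computing Bayes' rule.
for k in sorted(meta_learned):
    print(k, round(meta_learned[k], 3), round((k + 1) / 7, 3))
```

The point of the caricature: nothing in the "learner" mentions the prior explicitly, yet because its training tasks were sampled from that prior, its predictions converge to the Bayesian answer.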
20.05.2025 19:04 — 154 · 43 · 4 · 1
Unfortunately, the NSF grant that supports our work has been terminated. This is a setback, but our mission has not changed. We will continue to work hard on making cognitive science a more inclusive field. Stay tuned for upcoming events.
21.04.2025 19:05 — 265 · 95 · 4 · 6
Does the mind degrade or become enriched as we grow old? To explain healthy aging effects, the evidence supports enrichment. Indeed, the evidence suggests changes in crystallized (enrichment) and fluid intelligence (slowing) share a common cause. psycnet.apa.org/record/2026-...
17.04.2025 13:08 — 9 · 4 · 0 · 0
title of paper (in text) plus author list
Time course of word recognition for kids at different ages.
Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...
14.04.2025 21:58 — 68 · 27 · 1 · 1
from minicons import scorer
from nltk.tokenize import TweetTokenizer
lm = scorer.IncrementalLMScorer("gpt2")
# your own tokenizer function that returns a list of words
# given some sentence input
word_tokenizer = TweetTokenizer().tokenize
# word scoring
lm.word_score_tokenized(
    ["I was a matron in France", "I was a mat in France"],
    bos_token=True,  # needed for GPT-2/Pythia and NOT needed for others
    tokenize_function=word_tokenizer,
    bow_correction=True,  # Oh and Schuler correction
    surprisal=True,
    base_two=True,
)
'''
First word = -log_2 P(word | <beginning of text>)
[[('I', 6.1522440910339355),
('was', 4.033324718475342),
('a', 4.879510402679443),
('matron', 17.611848831176758),
('in', 2.5804288387298584),
('France', 9.036953926086426)],
[('I', 6.1522440910339355),
('was', 4.033324718475342),
('a', 4.879510402679443),
('mat', 19.385351181030273),
('in', 6.76780366897583),
('France', 10.574726104736328)]]
'''
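Because these scores are -log2 probabilities, the per-word values sum to the sentence-level surprisal by the chain rule. A quick sanity check on the numbers above (plain Python; values copied and rounded from the output, no minicons needed):

```python
# Per-word surprisals (bits) copied from the minicons output, rounded.
matron = [6.1522, 4.0333, 4.8795, 17.6118, 2.5804, 9.0370]
mat    = [6.1522, 4.0333, 4.8795, 19.3854, 6.7678, 10.5747]

# By the chain rule, -log2 P(sentence) is the sum of word surprisals.
total_matron = sum(matron)
total_mat = sum(mat)

print(round(total_matron, 2), round(total_mat, 2))  # 44.29 51.79

# Fewer total bits means the model finds the sentence more probable,
# so "matron in France" beats "mat in France".
assert total_matron < total_mat
```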
another day another minicons update (potentially a significant one for psycholinguists?)
"Word" scoring is now a thing! You just have to supply your own splitting function!
pip install -U minicons for merriment
02.04.2025 03:35 — 21 · 7 · 3 · 0
Figure 1. A schematic depiction of a model-mechanism mapping between a human learning system (left side) and a cognitive model (right side). Candidate model-mechanism mappings are pictured as mappings between representations, but they can also be in terms of input data, architecture, or learning objective.
Figure 2. Data efficiency in human learning. (left) Order of magnitude of LLM vs. human training data, plotted by human age. Ranges are approximated from Frank (2023a). (right) A schematic depiction of evaluation scaling curves for human learners vs. models plotted by training data quantity.
Paper abstract
AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?
In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...
06.03.2025 17:39 — 63 · 25 · 2 · 0
🚨 New Preprint!!
LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
05.03.2025 15:58 — 56 · 24 · 1 · 1
Depiction comparing standard views of statistical learning with a sponge and the new information foraging view with an octopus
What is human #StatisticalLearning for? The standard assumption is that the goal of SL is to learn the regularities in the environment to guide behavior. In our new Psych Review paper, we argue that SL instead provides the basis for novelty detection within an information foraging system.
1/2
27.02.2025 14:06 — 55 · 22 · 3 · 0
The ECOLANG Multimodal Corpus of adult-child and adult-adult Language
Inaugural post on bsky: The ECOLANG Multimodal Corpus, providing audiovisual recordings and annotations of multimodal communicative behaviours by English-speaking adults in dyadic interaction with a child or another adult, is now available: rdcu.be/eblMF
26.02.2025 10:19 — 49 · 23 · 1 · 0
Image of cover of forthcoming More Than Words: How Talking Sharpens the Mind and Shapes Our World, by Maryellen MacDonald
My new book, MORE THAN WORDS (Avery/PenguinRandomHouse) arrives 6/3! It tells the story of how we produce language & how talking shapes our lives in surprising ways. It's psyling for gen'l audiences! Info & preorders www.penguinrandomhouse.com/books/724046/more-than-words-by-maryellen-macdonald-phd/
24.02.2025 15:14 — 56 · 13 · 4 · 7
Pragmatics as Social Inference About Intentional Action
Abstract. Pragmatic inferences are based on assumptions about how speakers communicate: speakers are taken to be cooperative and rational; they consider alternatives and make intentional choices to pr...
Pragmatics as Social Inference About Intentional Action
New paper with @mcxfrank.bsky.social in Open Mind
We show that pragmatic inferences
- work w/o language
- take into account senders' epistemic states
- are conditional on intentional production of signals
direct.mit.edu/opmi/article...
20.02.2025 08:27 — 17 · 4 · 0 · 0
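The style of inference at stake can be sketched as a tiny rational-sender model (my own toy illustration in the spirit of RSA-style models; the referents, signals, and values are invented, not the paper's experiments):

```python
def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Two referents and two pointing-like signals; no language needed.
referents = ["has-glasses", "has-glasses-and-hat"]
signals = {
    "glasses": {"has-glasses", "has-glasses-and-hat"},  # true of both
    "hat": {"has-glasses-and-hat"},                     # true of one
}

def L0(signal):
    # Literal receiver: uniform over referents the signal is true of.
    return normalize({r: 1.0 if r in signals[signal] else 0.0 for r in referents})

def S1(referent):
    # Rational sender: considers alternative signals and intentionally
    # picks in proportion to how well each would pick out the referent.
    return normalize({s: L0(s)[referent] for s in signals})

def L1(signal):
    # Pragmatic receiver: inverts the sender's intentional choice.
    return normalize({r: S1(r)[signal] for r in referents})

print(L1("glasses"))
# "glasses" is literally ambiguous, but the pragmatic receiver favors
# the referent for which it was the sender's only usable signal.
```

The inference runs entirely on reasoning about an intentional choice among alternatives, which is the sense in which pragmatics here is social inference rather than anything specifically linguistic.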
Thanks, Jamie!
Sounds like you were the best kind of reviewer :)
06.02.2025 21:45 — 1 · 0 · 0 · 0
6/ These findings suggest that some aphasia-related comprehension challenges may be due to altered *expectations about noise* rather than a purely syntactic deficit. More broadly, understanding language processing in aphasia through a noisy-channel lens could inform new approaches to treatment.
06.02.2025 21:10 — 0 · 0 · 2 · 0
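The noisy-channel logic behind the thread can be written out in a few lines (toy sentences and probabilities of my own, not the paper's model or stimuli): the comprehender weighs the literal parse against a plausible alternative that noise could have produced.

```python
# P(intended | perceived) ∝ P(perceived | intended) * P(intended)

# Prior over intended sentences (assumed values).
prior = {
    "the mother gave the candle to the daughter": 0.95,  # plausible
    "the mother gave the daughter to the candle": 0.05,  # implausible
}

def likelihood(perceived, intended):
    # Toy noise model (assumed values): exact transmission is likely,
    # a single role-swap corruption is rare but possible.
    return 1.0 if perceived == intended else 0.1

perceived = "the mother gave the daughter to the candle"

posterior = {s: likelihood(perceived, s) * prior[s] for s in prior}
z = sum(posterior.values())
posterior = {s: p / z for s, p in posterior.items()}

# Despite hearing the implausible sentence verbatim, most posterior
# probability goes to the plausible alternative: the hallmark
# noisy-channel interpretation.
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

On this picture, group differences (e.g., in aphasia) can be cast as different settings of the noise model or the prior rather than as a missing syntactic ability.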
5/ What about individuals with aphasia?
We found:
✅ They rely more on noisy-channel inferences than healthy adults, even after accounting for differences in guessing between populations with a hierarchical mixture model.
🔹 Unlike in healthy adults, whether they adapt to noise remains unclear.
06.02.2025 21:10 — 0 · 0 · 1 · 0
Cognitive development researcher at OSU, foodie, music lover/deadhead, techie, and concerned about the nation. He/him
https://scholar.google.com/citations?user=XEkYhiAAAAAJ&hl=en
https://u.osu.edu/madlab
Social Interaction, Action and Understanding, Body Behavior, Conversation Analysis, Mixed Methods, Cross-Cultural Comparison, Systems Thinking. Associate Professor at UCLA Sociology.
https://meco-read.com is a collaborative international project that brings together researchers from over 40 countries to study reading across different languages and writing systems
PhD Candidate, Psychological & Brain Sciences, Johns Hopkins University
concepts | language | plasticity | development | neuroscience
PhD student at Cornell studying intergroup cognition | baking enthusiast | she/her
kirstanbrodie.github.io
Philosopher, phenomenologist, and cognitive scientist at
UC Merced. Interested in neural networks and dynamical systems theory. Builder of simbrain.net and husserl.net. Website: https://jeffyoshimi.net/
PhD Candidate @UCIrvine
Computational (Psycho)linguistics & Cognitive Science
https://shiupadhye.github.io/
Educator, Author of The Daycare Myth: What We Get Wrong About Early Care and Education (and What We Should Do About It)
Assistant professor at Yale Linguistics. Studying computational linguistics, cognitive science, and AI. He/him.
psycholinguist, semanticist, pragmaticist || viα»t, asian american || she / they / chα» / em || asst prof interested in the Internet & language || and cycling
language & mind-ish things (mostly fun sometimes sin), interested in what science is and does, pretend stoic but for-real epicurean (but for-real-real hedonist), we're in this together, la do la si, &c &c
tryna be drunk (comme Baudelaire)
Assistant professor with too many opinions. Texpat, politics academia, gay stuff, anti-carbrain. Not an AI brain genius guy. 🇵🇸🏳️‍🌈🇺🇦🏳️‍⚧️
"See you divas on the streets."
E me aperta pra eu quase sufocar
BayesForDays@lingo.lol on Mastodon.
PhD student Georgia Tech π and LIT Lab. Interested in how brains and machines learn, organize, and use knowledge about the world.
https://w-decker.github.io/
Pre-doc researcher
Linguistics | Clinical Linguistics | Cognitive Semantics
The Cognitive & Information Sciences Department at UC Merced, offering cutting edge interdisciplinary undergraduate and graduate training at the University of California's 10th campus.
MIT Brain and Cognitive Sciences
Just PhD-ed at University of Maryland. Peking U alum. Interested in how we represent objects, structures and aesthetics across language and vision, with EEG/SQUID & OPM MEG/.... Personal website: https://sites.google.com/view/xinchiyu/home
Assistant Professor of Psycholinguistics/Neurolinguistics at Cyprus University of Technology.
www.fyndanis.com
UCLA Associate Professor, PhD Researcher of brains π§ (development, stem cells, neuroinflammation, autism, sensory processing, brain injury & repair)
Teacher of Neuroanatomy, Neurophilosophy (consciousness, cognitive science), & Stem Cell Biology
PhD student at Harvard/MIT working with @evfedorenko.bsky.social @nancykanwisher.bsky.social | interested in neuroscience, language, AI | @kempnerinstitute.bsky.social @mitbcs.bsky.social | coltoncasto.github.io