The first publication of the #ERC project "LaDy" is out, and it's an important one, I think:
We show that word processing and meaning prediction are fundamentally different during social interaction compared to using language individually!
Short 🧵 1/
psycnet.apa.org/fulltext/202...
#OpenAccess
10.10.2025 17:12 · 35 likes · 9 reposts · 4 replies · 0 quotes
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social
built a library to easily compare design choices & model features across datasets!
We hope it will be useful to the community & plan to keep expanding it!
1/
29.09.2025 17:33 · 30 likes · 6 reposts · 1 reply · 0 quotes
University of California | President's Postdoctoral Fellowship Program
🚨 Postdoc Opportunity PSA! 🚨
🗓️ UC President's Postdoctoral Fellowship Program applications are due Nov. 1 (ppfp.ucop.edu/info/)
Open to anyone interested in a postdoc & academic career at a UC campus.
I'm happy to sponsor an applicant if there's a good fit; please reach out!
18.09.2025 18:19 · 7 likes · 2 reposts · 0 replies · 0 quotes
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.
In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.
Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
19.08.2025 01:12 · 51 likes · 10 reposts · 1 reply · 1 quote
New paper with @rjantonello.bsky.social @csinva.bsky.social, Suna Guo, Gavin Mischler, Jianfeng Gao, & Nima Mesgarani: We use LLMs to generate VERY interpretable embeddings where each dimension corresponds to a scientific theory, & then use these embeddings to predict fMRI and ECoG. It WORKS!
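A hedged sketch of the recipe as I read it from the post, not the authors' actual pipeline: each embedding dimension is a yes/no answer to one theory-derived question about the stimulus, and the resulting interpretable features predict brain responses via ridge regression. The questions and the answer_yes_no stub below are illustrative placeholders.

import numpy as np
from sklearn.linear_model import RidgeCV

QUESTIONS = [  # placeholder theory-derived questions, not the paper's
    "Does this sentence refer to a person?",
    "Does this sentence describe physical motion?",
    "Does this sentence involve social interaction?",
]

def answer_yes_no(question: str, text: str) -> float:
    # placeholder for a real LLM yes/no judgment; swap in a model call
    rng = np.random.default_rng(abs(hash((question, text))) % 2**32)
    return float(rng.random() > 0.5)

def embed(text: str) -> np.ndarray:
    # one interpretable dimension per question
    return np.array([answer_yes_no(q, text) for q in QUESTIONS])

texts = ["The girl ran home.", "He thought quietly.", "They argued loudly."]
X = np.stack([embed(t) for t in texts])           # (n_stimuli, n_questions)
Y = np.random.default_rng(0).normal(size=(3, 5))  # stand-in for fMRI voxel data
model = RidgeCV().fit(X, Y)  # each weight is attributable to one question/theory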
18.08.2025 18:33 · 16 likes · 8 reposts · 1 reply · 0 quotes
LLM finds it FAR easier to distinguish b/w DO & PO constructions when the lexical & info structure of instances conform more closely w/ the respective constructions (left panel). Where's pure syntax? LLM seems to say 🤷 (right panel) @SRakshit
adele.scholar.princeton.edu/sites/g/file...
18.08.2025 19:12 · 15 likes · 4 reposts · 0 replies · 0 quotes
If you missed us at #cogsci2025, my lab presented 3 new studies showing how efficient (lossy) compression shapes individual learners, bilinguals, and action abstractions in language, further demonstrating the extraordinary applicability of this principle to human cognition! 🧵
1/n
09.08.2025 13:46 · 29 likes · 13 reposts · 1 reply · 0 quotes
(1) 💡NEW PUBLICATION💡
Word and construction probabilities explain the acceptability of certain long-distance dependency structures
Work with Curtis Chen and Ted Gibson
Link to paper: tedlab.mit.edu/tedlab_websi...
In memory of Curtis Chen.
05.08.2025 13:25 · 4 likes · 1 repost · 1 reply · 0 quotes
1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).
31.07.2025 17:55 · 18 likes · 7 reposts · 1 reply · 0 quotes
Looking forward to seeing everyone at #CogSci2025 this week! Come check out what weβve been working on in the LInC Lab, along with our fantastic collaborators!
Paper 🔗 in 🧵👇
30.07.2025 18:28 · 5 likes · 2 reposts · 1 reply · 0 quotes
Thrilled to see this work published, and even more thrilled to have been part of such a great collaborative team!
One key takeaway for me: Webcam eye-tracking w/ jsPsych is awesome for 4-quadrant visual world paradigm studies -- less so for displays w/ smaller ROIs.
08.07.2025 21:41 · 4 likes · 0 reposts · 0 replies · 0 quotes
New paper w/ @ryskin.bsky.social and Chen Yu: We analyzed parent-child toy play and found that cross-situational learning statistics were present in naturalistic settings!
onlinelibrary.wiley.com/doi/epdf/10....
19.06.2025 18:24 · 4 likes · 1 repost · 1 reply · 0 quotes
What are the organizing dimensions of language processing?
We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness -- revealing an interpretable, topographic representational basis for language processing shared across individuals
23.05.2025 16:59 · 71 likes · 30 reposts · 3 replies · 0 quotes
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model, visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").
🤖🧠 Paper out in Nature Communications! 🧠🤖
Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?
Our answer: Use meta-learning to distill Bayesian priors into a neural network!
www.nature.com/articles/s41...
1/n
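Since the thread describes the method, here is a toy analogue of the distillation idea under my own assumptions, not the paper's setup or code: sample many tasks from a Bayesian prior (here, coin biases drawn from a uniform Beta(1,1) prior), train one network across all of them, and the prior ends up baked into the weights, so on a new task the network approximates the Bayesian posterior predictive.

import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    theta = torch.rand(64, 1)                   # one coin bias per task ~ prior
    n = torch.randint(0, 10, (64, 1)).float()   # number of flips observed
    heads = torch.binomial(n, theta)            # observed head counts
    x = torch.cat([heads, n], dim=1)            # input: sufficient statistics
    y = torch.bernoulli(theta)                  # next flip from the same coin
    loss = nn.functional.binary_cross_entropy_with_logits(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Bayesian answer (Laplace's rule): P(heads | 3 heads in 4 flips) = 4/6 ≈ 0.67
print(torch.sigmoid(net(torch.tensor([[3.0, 4.0]]))))  # ≈ 0.67 after training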
20.05.2025 19:04 · 154 likes · 43 reposts · 4 replies · 1 quote
Unfortunately, the NSF grant that supports our work has been terminated. This is a setback, but our mission has not changed. We will continue to work hard on making cognitive science a more inclusive field. Stay tuned for upcoming events.
21.04.2025 19:05 · 263 likes · 95 reposts · 4 replies · 6 quotes
Does the mind degrade or become enriched as we grow old? In explaining healthy aging effects, the evidence supports enrichment. Indeed, the evidence suggests that changes in crystallized intelligence (enrichment) and fluid intelligence (slowing) share a common cause. psycnet.apa.org/record/2026-...
17.04.2025 13:08 · 10 likes · 4 reposts · 0 replies · 0 quotes
Time course of word recognition for kids at different ages.
Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...
14.04.2025 21:58 · 68 likes · 27 reposts · 1 reply · 1 quote
from minicons import scorer
from nltk.tokenize import TweetTokenizer

lm = scorer.IncrementalLMScorer("gpt2")

# your own tokenizer function that returns a list of words
# given some sentence input
word_tokenizer = TweetTokenizer().tokenize

# word scoring
lm.word_score_tokenized(
    ["I was a matron in France", "I was a mat in France"],
    bos_token=True,  # needed for GPT-2/Pythia and NOT needed for others
    tokenize_function=word_tokenizer,
    bow_correction=True,  # Oh and Schuler correction
    surprisal=True,
    base_two=True,
)
'''
First word = -log_2 P(word | <beginning of text>)
[[('I', 6.1522440910339355),
('was', 4.033324718475342),
('a', 4.879510402679443),
('matron', 17.611848831176758),
('in', 2.5804288387298584),
('France', 9.036953926086426)],
[('I', 6.1522440910339355),
('was', 4.033324718475342),
('a', 4.879510402679443),
('mat', 19.385351181030273),
('in', 6.76780366897583),
('France', 10.574726104736328)]]
'''
another day another minicons update (potentially a significant one for psycholinguists?)
"Word" scoring is now a thing! You just have to supply your own splitting function!
pip install -U minicons for merriment
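And in case it helps intuition, here's a minimal sketch of what word scoring amounts to under the hood; my own illustration with plain transformers, not minicons internals: a word's surprisal is the sum of its subword tokens' surprisals, since P(word | context) is the product of the subtoken probabilities. (The bow_correction flag above adjusts for a word-boundary subtlety this sketch ignores.)

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def word_surprisal_bits(context: str, word: str) -> float:
    # -log2 P(word | context), summing over the word's subword tokens
    ids = tok(tok.bos_token + context, return_tensors="pt").input_ids
    word_ids = tok(" " + word).input_ids  # leading space = word start in GPT-2 BPE
    bits = 0.0
    for wid in word_ids:
        with torch.no_grad():
            logprobs = torch.log_softmax(model(ids).logits[0, -1], dim=-1)
        bits -= logprobs[wid].item() / math.log(2)
        ids = torch.cat([ids, torch.tensor([[wid]])], dim=1)
    return bits

print(word_surprisal_bits("I was a", "matron"))  # high surprisal
print(word_surprisal_bits("I was a", "mat"))     # higher still, as in the demo above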
02.04.2025 03:35 · 21 likes · 7 reposts · 3 replies · 0 quotes
Figure 1. A schematic depiction of a model-mechanism mapping between a human learning system (left side) and a cognitive model (right side). Candidate model mechanism mappings are pictured as mapping between representations but also can be in terms of input data, architecture, or learning objective.
Figure 2. Data efficiency in human learning. (left) Order of magnitude of LLM vs. human training data, plotted by human age. Ranges are approximated from Frank (2023a). (right) A schematic depiction of evaluation scaling curves for human learners vs. models plotted by training data quantity.
AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?
In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...
06.03.2025 17:39 · 63 likes · 25 reposts · 2 replies · 0 quotes
🚨 New Preprint!!
LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment: linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
05.03.2025 15:58 · 59 likes · 24 reposts · 1 reply · 2 quotes
The mystery of the Universe is its comprehensibility. Computational psycholinguist, I guess, but not a professional account. he/him (pfp cc. Dustin Nguyen)
Psycholinguist//Research group leader @Uni_Marburg
https://www.uni-marburg.de/en/fb09/dsa/research-building/junior-research-group
university of melbourne phd candidate in computational cognitive science. I study learning, strategy, and communication.
merrickgiles.neocities.org
Cognitive development researcher at OSU, foodie, music lover/deadhead, techie, and concerned about the nation. He/him
https://scholar.google.com/citations?user=XEkYhiAAAAAJ&hl=en
https://u.osu.edu/madlab
Social Interaction, Action and Understanding, Body Behavior, Conversation Analysis, Mixed Methods, Cross-Cultural Comparison, Systems Thinking. Associate Professor at UCLA Sociology.
https://meco-read.com is a collaborative international project that brings together researchers from over 40 countries to study reading across different languages and writing systems
PhD Candidate, Psychological & Brain Sciences, Johns Hopkins University
concepts | language | plasticity | development | neuroscience
https://m-hauptman.github.io/
PhD student at Cornell studying intergroup cognition | NSF GRFP fellow | baking enthusiast | she/her
kirstanbrodie.github.io
Philosopher, phenomenologist, and cognitive scientist at
UC Merced. Interested in neural networks and dynamical systems theory. Builder of simbrain.net and husserl.net. Website: https://jeffyoshimi.net/
PhD candidate interested in language, cognition, and computation at @ucirvine.bsky.social
Website: https://shiupadhye.github.io/
Educator, Author of The Daycare Myth: What We Get Wrong About Early Care and Education (and What We Should Do About It)
Assistant professor at Yale Linguistics. Studying computational linguistics, cognitive science, and AI. He/him.
psycholinguist, semanticist, pragmaticist || việt, asian american || she / they / chị / em || asst prof interested in the Internet & language || and cycling
language & mind-ish things (mostly fun sometimes sin), interested in what science is and does, pretend stoic but for-real epicurean (but for-real-real hedonist), we're in this together, la do la si, &c &c
tryna be drunk (like Baudelaire)
Assistant professor with too many opinions. Texpat, politics academia, gay stuff, anti-carbrain. Not an AI brain genius guy. "Train Linguist." 🇵🇸🏳️‍🌈🇺🇦🏳️‍⚧️
"See you divas on the streets."
And hold me tight till I almost suffocate
BayesForDays@lingo.lol on Mastodon.
PhD student @ Georgia Tech and LIT Lab. Interested in how brains and machines learn, organize, and use knowledge about the world.
https://w-decker.github.io/
Pre-doc researcher
Linguistics | Clinical Linguistics | Cognitive Semantics
The Cognitive & Information Sciences Department at UC Merced, offering cutting edge interdisciplinary undergraduate and graduate training at the University of California's 10th campus.
MIT Brain and Cognitive Sciences
Just PhD-ed at the University of Maryland. Peking U alum. Interested in how we represent objects, structures, and aesthetics across language and vision, with EEG, SQUID & OPM MEG, .... Personal website: https://sites.google.com/view/xinchiyu/home