Matt Goldrick

@mattgoldrick.bsky.social

Linguistics and cognitive science at Northwestern. Opinions are my own. he/him/his

1,405 Followers  |  958 Following  |  419 Posts  |  Joined: 19.08.2023

Posts by Matt Goldrick (@mattgoldrick.bsky.social)

Deadline March 30 (rolling): Postdoc, computational linguistics and language acquisition w/ H. Dai, Linguistics, U Michigan careers.umich.edu/job_detail/2...

27.02.2026 02:17 — 👍 4    🔁 3    💬 0    📌 0

Tenure track position in the multimodal language department at the MPI!

27.02.2026 02:15 — 👍 1    🔁 2    💬 0    📌 0
Amplifiers of Epistemic Posture Essays and writing on AI

New essay on LLMs and brain rot:
"LLMs do not inevitably corrode thinking. They amplify whatever epistemic posture you bring..."
sbgeoaiphd.github.io/rotating_the...

25.02.2026 14:01 — 👍 110    🔁 16    💬 7    📌 7

📅 Feb 28 | 11AM — Writing Back: Peer Review with Agency workshop w/ @amandadiekman.bsky.social & Dorainne Green

See you there! 🏡 #PeerReview #Psychology

24.02.2026 14:53 — 👍 5    🔁 3    💬 0    📌 0
Sociolinguistics in Practice: Interview With Penelope Eckert In this interview, Penelope Eckert discusses her life experiences and career as a linguist. Eckert describes a lifelong fascination with language, from her earliest observations of stylistic variatio...

Lovely interview of Penny Eckert by Annette D'Onofrio - several quotes feel deeply healing to me, but this one in particular really resonated with me: (1/3)

doi.org/10.1111/josl...

24.02.2026 19:21 — 👍 12    🔁 3    💬 1    📌 1

Deadline Mar 26: Asst prof (teaching track), comp. soc. sci. + cogsci, UCSD Cog Sci apol-recruit.ucsd.edu/JPF04461

25.02.2026 04:00 — 👍 2    🔁 2    💬 0    📌 0
OSF

A new preprint, co-authored with @johnwkrakauer.bsky.social:

The Deliberation Taboo

Cognitive science is, nominally, the science of thinking. We argue that the field has no theory of what thinking is and, even worse, that the topic has largely dropped out of focus. 1/

osf.io/preprints/ps...

24.02.2026 13:53 — 👍 136    🔁 52    💬 4    📌 12

Our community has a packed schedule at #SPSP2026 in Chicago! 📍 Come find us:

📅 Feb 27 | 9:30AM — Advancing Inclusion in Publishing panel, organized by @frankikung.bsky.social, moderated by @mattgoldrick.bsky.social, with Eranda Jayawickreme, Alison Ledgerwood, Christian Unkelbach & Yuen Huo

24.02.2026 14:53 — 👍 2    🔁 5    💬 1    📌 0
Avoiding unintended consequences: science of reading policies may harm deaf children Abstract. Many U.S. policies inspired by the Science of Reading rest on two assumptions: (1) skilled reading always involves automatic mapping between writ

The title of this new article by >30 deaf and hearing authors says it all "Avoiding unintended consequences: Science of reading policies have potential to harm deaf children" doi.org/10.1093/jdsa...

22.02.2026 23:14 — 👍 24    🔁 17    💬 2    📌 1
CorpusPhon 2 A one-day satellite workshop at LabPhon 20 to bring together researchers using corpus phonetic tools

📣 Phoneticians/phonologists: CorpusPhon is happening again this year on June 29th at LabPhon in Montreal! Submissions due Friday, March 13th. Hope to see you there!

sites.google.com/view/corpusp...

21.02.2026 23:39 — 👍 10    🔁 6    💬 0    📌 1

Congratulations!!!!!!

21.02.2026 00:30 — 👍 1    🔁 0    💬 0    📌 0
Microsabbaticals at Princeton Psychology Microsabbaticals at Princeton Psychology provide a several-week-long visit to our department for early-career faculty. The program focuses on early-career scholars who would benefit from interactions ...

Are you a junior faculty member interested in spending 2-4 weeks at Princeton Psych? Consider applying for our Microsabbatical program! It’s a fully funded visit for professional development and creating long-term collaborations.
psych.princeton.edu/diversity/mi...

18.02.2026 20:04 — 👍 61    🔁 49    💬 0    📌 2

We start our review of applications for our tenure track position in cognitive psychology tomorrow. Cookie wants to remind you to submit your materials.

17.02.2026 18:42 — 👍 10    🔁 5    💬 0    📌 0
Hot Metal Bridge Post-Bac Program | The Dietrich School of Arts & Sciences Graduate Studies | University of Pittsburgh This two-semester post-baccalaureate fellowship program is designed to help talented students from groups traditionally underrepresented in their academic disciplines, including pell eligible, first g...

Know a promising undergrad who wants more time before applying to grad school? Pitt has a funded postbac program for students from underrepresented groups.

This year, my lab will consider applications for solo supervision or to be co-supervised by @mehrgol.bsky.social!

App deadline is March 15!

16.02.2026 20:43 — 👍 40    🔁 48    💬 0    📌 0
Postdoctoral Requisition Details - Jobs@UIOWA: Search and Apply for Jobs at The University of Iowa Jobs@UIOWA: The official place to search and apply for jobs at The University of Iowa.

I am looking to hire 2-3 post-docs over the course of the next few months to work on questions related to cognitive control in humans, broadly construed. EEG, TMS, DBS, sEEG, fMRI or related methodological experience preferred.
Apply here:

jobs.uiowa.edu/jobSearch/po...

Lab website: wessellab.org

13.02.2026 22:54 — 👍 29    🔁 36    💬 1    📌 2
Group photograph of faculty and participants of the very first Cold Spring Harbor summer course on Genetics and Neurobiology of Language in 2014, taken as the sun was going down at the Banbury Campus, Lloyd Harbor.

Please tell friends & colleagues about our unique course “Genetics & Neurobiology of Language” July 27-Aug 3 2026. Expert tutors, interactive talks, panel discussions, all in a beautiful setting. Scholarships available: meetings.cshl.edu/courses.aspx...
@cshlnews.bsky.social @cshlbanbury.bsky.social

13.02.2026 17:01 — 👍 36    🔁 27    💬 2    📌 1
Frontiers | Girls just wanna have funds: a new Transparent Reporting Scale for evaluating grant data reporting from funding agencies IntroductionDespite the increasing representation of women in scientific fields, disparities in research funding allocation remain. This inequity deprives ta...

How transparent is your research funder? 🧐 In our latest work we introduce the Transparent Reporting Scale (TRS) to evaluate how funders report grant data. It's time for standardized transparency to bridge the "scissor-shaped curve" in neuroscience. www.frontiersin.org/journals/com...

13.02.2026 13:37 — 👍 16    🔁 9    💬 1    📌 2

Excited for my first AAAS! Come to our session and learn how linguistic diversity is not (currently) a strong suit of AI models. I'll be alternating between my AI-optimist and -pessimist hats. #AAASmtg #linguistics

13.02.2026 15:13 — 👍 4    🔁 1    💬 1    📌 0
"While most AI tries to fix humans @simile_ai is building AI that understands them.

They build digital twins that capture someone’s worldview, then simulate how customers, employees or entire populations will actually respond to change.

Born out of Stanford generative agent research. Now backed by $100M to turn that into a category.

AI is getting smarter and Simile is making it more human. We're proud to be in their corner."

A proposed solution is to build generative agents that represent specific individuals (Box 1). One such study [6] recruited a sample of ~1000 US participants nationally representative for age, gender, race, region, education, and political ideology; programmed an LLM chatbot to interview each participant for 2 h; and asked the participants to complete a battery of questionnaires and tasks. They then used the interview transcripts to prompt ~1000 LLM agents to role-play each of the human participants on the same questionnaires and tasks. Observing a high correspondence between the responses of the generative agents and their human counterparts, the researchers concluded that LLMs prompted in this way can capture the ‘idiosyncratic nature’ of real people across a range of situations [57]. Some researchers propose making generative agents even more representative by training them on their human counterparts’ ‘emails, messages and social media posts’, as well as ‘text generated by friends, family or coworkers’ [23]. (We note this raises critical questions about informed consent; see Outstanding questions.) The logic here is that, because generative agents are built to represent a diverse sample of specific individuals, researchers could then run thousands of experiments on the generative agents and feel confident that the resultant data are faithful to the original samples. Researchers could even populate virtual worlds with generative agents, running large-scale simulations to test interventions and policies (Box 2). Nevertheless, the generative agents paradigm faces hard limits to its potential representativeness. By design, generative agents can only represent individuals who consent to sharing sensitive data with scientists, which carries substantial privacy risks [6,58]. Given these risks, people

with stronger privacy concerns are less likely to consent to such studies. Members of marginalized groups in the USA, including women, gender minorities, people of color, and disabled people, have heightened privacy concerns and more negative attitudes about AI [59,60]ii–iv. These groups have historically faced disproportionate surveillance [61,62] and theft of their biometric and behavioral data for scientific research [63–65], including training machine learning models [66]. Regimes of digital surveillance spread globally [67], creating frictions where global north ideologies touch down in the global south [68]. These entrenched and repeating patterns raise cascading problems for the generative agents approach: first, members of marginalized groups are less likely to participate and, second, those who do will be less representative of their groups. Any attempt to build AI Surrogates that are truly representative of diverse populations will likely face a hard limit that marginalized people are (justifiably) less willing to entrust their data to scientists.

Box 2. Generative agents and simulated worlds

Researchers note that ‘many of the most interesting research questions, such as the psychology of world leaders, the effects of large-scale policy change, or the effects of large-scale events on the general public’ are ‘logistically infeasible’ to study in the laboratory ‘with any realistic amount of resources’ [23]. In response, generative agents populating simulated worlds are seen as promising research paths. For example, researchers could create generative agents based on the profiles of Palo Alto residents and simulate how the community would respond to different pandemic interventionsv. Much of the technical research on artificial agents acting in simulated worlds originates in fields beyond cognitive science, including computer science, sociology, economics, political science, computational social science, as well as private industry [9,112–116].

Developers of these agent architectures have lofty ambitions. They believe that this technology can ‘test interventions and theories and gain real-world insights’ [58], serving as ‘a high-fidelity platform for policy outcome evaluation’ to enable ‘data-driven policy selection’ [115]. Given these ambitions, validating that these models can generalize to the real world is imperative [116], and some researchers caution that ‘current architectures must cover some distance before their use is reliable’ [58]. Yet such validation faces a paradox: these models can only be validated against the ground truth of real-world data, but their appeal lies in simulating scenarios where ground truth is not available. Some researchers [22] propose to meet this challenge by identifying ‘the most proximal cases for which ground-truth data from human subjects is available’ and using those cases to validate the simulation’s predictions ‘before turning the model to a domain in which no ground truth exists’. However, there is currently ‘no consensus’ around how proximal is proximal enough [116].
Imp…


Stanford CS researchers just got a huge payday for promising AI agents that can simulate the real world. @mjcrockett.bsky.social and I wrote about these researchers' vision. Screenshotting quite a lengthy part of our paper, because we spent A LOT of time thinking about the paucity of this promise

13.02.2026 14:43 — 👍 82    🔁 24    💬 5    📌 6

And of course, an ultimate mechanism is that scientists do not randomly select their hypotheses or results from an urn of unknowns. This is a strange, naive vision of science, very prevalent in metascience. Selection happens as a function of background knowledge in a domain. We look more closely

13.02.2026 00:41 — 👍 62    🔁 13    💬 5    📌 1

But I do think our technique could be applied to the model in your JPhon paper!

11.02.2026 17:17 — 👍 1    🔁 0    💬 0    📌 0

Very cool! Thanks for the pointers to this! I don't believe the particular analysis technique we used (layerwise relevance propagation) has been extended to transformers like wav2vec 2.0, so one couldn't do it with the existing system -- but a new CNN could be trained on that dataset!

11.02.2026 17:14 — 👍 1    🔁 0    💬 1    📌 0

It would be really interesting to try this approach cross-linguistically, to see differences across classifiers trained on different languages!

11.02.2026 16:22 — 👍 2    🔁 0    💬 1    📌 0

One limitation is that the input to the network is essentially a spectrogram -- so spectral information is what it primarily has access to. That said, the reliance on spectral information cueing vowel reduction is broadly consistent with prior human perceptual studies (Cutler 2005 is a great review)

11.02.2026 16:21 — 👍 2    🔁 0    💬 1    📌 0
GitHub - ItaiAllouche/minimalPairsLexicalStress: What Does Lexical Stress Look Like?: A reproducible Demo Of Lexical Stress Classification and LRP-based Interpretation What Does Lexical Stress Look Like?: A reproducible Demo Of Lexical Stress Classification and LRP-based Interpretation - ItaiAllouche/minimalPairsLexicalStress

Check out our demo and apply to your own datasets! github.com/ItaiAllouche... 4/4

11.02.2026 14:41 — 👍 4    🔁 0    💬 0    📌 0

We then use minimal pairs to investigate what acoustic properties the CNN is using to make its decision. The results show that its decisions are driven in part by "classical" cues from the phonetics literature, but also point towards novel cues. 3/

11.02.2026 14:41 — 👍 1    🔁 0    💬 1    📌 0
MLSpeech/lexical_stress_dataset · Datasets at Hugging Face We’re on a journey to advance and democratize artificial intelligence through open source and open science.

We extract a large dataset of English disyllabic words from conversational and read speech datasets (available here: huggingface.co/datasets/MLS...). We use the non-minimal pairs in this set to train a CNN to distinguish initial and final stress. 2/

11.02.2026 14:41 — 👍 0    🔁 0    💬 1    📌 0
How does a deep neural network look at lexical stress in English words? Despite their success in speech processing, neural networks often operate as black boxes, prompting the following questions: What informs their decisions, and h

Out today! "How Does a Deep Neural Network Look at Lexical Stress in English Words?" w/ I. Allouche, I. Asael, R. Rousso, V. Dassa, A. Bradlow, S.-E. Kim & @keshet.bsky.social doi.org/10.1121/10.0... 1/

11.02.2026 14:41 — 👍 19    🔁 3    💬 3    📌 0

Deadline 15 Feb: 2 postdoc positions, neurocognition of bilingualism, UiT Arctic Uni. Norway. Position (1) www.jobbnorge.no/en/available... Position (2) www.jobbnorge.no/en/available...

09.02.2026 15:24 — 👍 4    🔁 3    💬 0    📌 0

Deadline Mar 20: Postdoc, fNIRS+bilingual language development w/ M. Kalashnikova, Basque Center on Cognition, Brain, + Language calls.bcbl.eu/index.php?op...

05.02.2026 16:19 — 👍 3    🔁 5    💬 1    📌 0