
Rachel Ryskin

@ryskin.bsky.social

Cognitive scientist @ UC Merced http://raryskin.github.io PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io she

443 Followers  |  269 Following  |  18 Posts  |  Joined: 24.10.2023

Latest posts by ryskin.bsky.social on Bluesky

Post image

The first publication of the #ERC project 'LaDy' is out, and it's an important one, I think:

We show that word processing and meaning prediction are fundamentally different during social interaction compared to using language individually!
👀 short 🧵/1

psycnet.apa.org/fulltext/202...
#OpenAccess

10.10.2025 17:12 · 👍 35    🔁 9    💬 4    📌 0

As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social built a library to easily compare design choices & model features across datasets!

We hope it will be useful to the community & plan to keep expanding it!
1/

29.09.2025 17:33 · 👍 30    🔁 6    💬 1    📌 0
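
For a concrete picture of what a voxelwise encoding model involves, here is a minimal sketch on simulated data: ridge regression from stimulus features (standing in for language-model activations) to brain responses, scored by held-out correlation. The variable names, feature choices, and regularization value are illustrative assumptions, not the announced library's API.

# Minimal voxelwise encoding-model sketch on simulated data (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins for real data: LM-derived stimulus features and brain responses.
n_train, n_test, n_features, n_voxels = 400, 100, 64, 200
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

# One design choice among many (the regularization strength); comparing such
# choices across datasets is the kind of thing an encoding-model library automates.
model = Ridge(alpha=10.0).fit(X_train, Y_train)
pred = model.predict(X_test)

# Score each voxel by the correlation between predicted and held-out responses.
scores = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(scores):.2f}")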
Preview
Planning to be incremental: Scene descriptions reveal meaningful clustering in language production How do speakers plan complex descriptions and then execute those plans? In this work, we attempt to answer this question by asking subjects to describ…

New paper: We argue that linearization in language production is a foraging process, with speakers navigating semantic and spatial clusters. Lead author: Karina Tachihara, former UC Davis postdoc, now faculty at UIUC!

www.sciencedirect.com/science/arti...

22.09.2025 22:44 · 👍 66    🔁 13    💬 0    📌 1
University of California | President’s Postdoctoral Fellowship Program

🚨 Postdoc Opportunity PSA! 🚨

🗓️ UC President's Postdoctoral Fellowship Program applications are due Nov. 1 (ppfp.ucop.edu/info/)

Open to anyone interested in a postdoc & academic career at a UC campus.

I'm happy to sponsor an applicant if there's a good fit; please reach out!

18.09.2025 18:19 · 👍 7    🔁 2    💬 0    📌 0
Preview
Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech - Psychonomic Bulletin & Review Researchers have generally assumed that listeners perceive speech compositionally, based on the combined processing of local acoustic–phonetic cues associated with individual linguistic units. Yet, th...

Officially out! "Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech" with S. E. Kim, B. R. Chernyak, @keshet.bsky.social, me, & A. Bradlow: link.springer.com/article/10.3.... BONUS: free online similarity calculator so you can join in the fun! 1/

08.09.2025 16:29 · 👍 29    🔁 9    💬 1    📌 0
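
As a rough illustration of the distance-based idea (not the paper's pipeline or the linked calculator), the sketch below places hypothetical talkers in a perceptual similarity space, computes inter-talker distances, and asks whether those distances track intelligibility. All embeddings and scores are simulated.

# Illustrative sketch: talkers as points in a similarity space, with
# inter-talker distance related to intelligibility. Simulated data only.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical embeddings: 10 talkers in a 5-dimensional perceptual space.
talker_embeddings = rng.standard_normal((10, 5))
distances = squareform(pdist(talker_embeddings))  # pairwise inter-talker distances

# Hypothetical intelligibility scores (e.g., proportion of words transcribed correctly).
intelligibility = rng.uniform(0.5, 1.0, size=10)

# One simple question: do talkers far from the rest of the space also stand apart
# in intelligibility? (A stand-in for the paper's relative-intelligibility analysis.)
mean_distance = distances.mean(axis=1)
rho, p = spearmanr(mean_distance, intelligibility)
print(f"Spearman rho between mean distance and intelligibility: {rho:.2f} (p = {p:.2f})")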
Video thumbnail

Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!

19.08.2025 01:12 · 👍 51    🔁 10    💬 1    📌 1

New paper with @rjantonello.bsky.social @csinva.bsky.social, Suna Guo, Gavin Mischler, Jianfeng Gao, & Nima Mesgarani: We use LLMs to generate VERY interpretable embeddings where each dimension corresponds to a scientific theory, & then use these embeddings to predict fMRI and ECoG. It WORKS!

18.08.2025 18:33 · 👍 16    🔁 8    💬 1    📌 0
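
A hypothetical sketch of how such interpretable embeddings might be built, as I read the post: each dimension is an LLM's yes/no answer to a question derived from a scientific theory, and the resulting vectors can then feed a standard encoding model. The question list and the ask_llm_yes_no stub below are assumptions, not the authors' implementation.

# Hypothetical sketch: one embedding dimension per theory-derived yes/no question.
import numpy as np

THEORY_QUESTIONS = [
    "Does this text mention a concrete, visible object?",
    "Does this text describe a social interaction?",
    "Does this text involve negation?",
]

def ask_llm_yes_no(question: str, text: str) -> float:
    """Placeholder for an LLM call that returns 1.0 for 'yes' and 0.0 for 'no'."""
    return float(hash((question, text)) % 2)  # stand-in, NOT a real model

def interpretable_embedding(text: str) -> np.ndarray:
    # One dimension per question, so every dimension has a readable meaning.
    return np.array([ask_llm_yes_no(q, text) for q in THEORY_QUESTIONS])

stimuli = ["The red ball rolled away.", "She did not answer his question."]
X = np.stack([interpretable_embedding(s) for s in stimuli])
print(X)  # rows = stimuli, columns = named theory dimensions (inputs to an encoding model)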
Post image

The LLM finds it FAR easier to distinguish b/w DO & PO constructions when the lexical & info structure of instances conforms more closely w/ the respective constructions (left 👇). Where's pure syntax? The LLM seems to say "🤷‍♀️" (right) @SRakshit
adele.scholar.princeton.edu/sites/g/file...

18.08.2025 19:12 · 👍 15    🔁 4    💬 0    📌 0
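
One way to probe this kind of constructional preference is to compare summed word surprisals for matched double-object (DO) and prepositional-object (PO) sentences, for instance with the minicons word-scoring interface shown further down this feed. The sentence pair below is a made-up illustration, not the paper's stimuli or analysis.

# Rough sketch: compare total surprisal for a DO vs. PO version of the same message.
from minicons import scorer
from nltk.tokenize import TweetTokenizer

lm = scorer.IncrementalLMScorer("gpt2")
tokenize = TweetTokenizer().tokenize

do_sentence = "She gave the boy a book"        # double-object (DO)
po_sentence = "She gave a book to the boy"     # prepositional-object (PO)

scores = lm.word_score_tokenized(
    [do_sentence, po_sentence],
    bos_token=True,                 # needed for GPT-2, as in the example below
    tokenize_function=tokenize,
    surprisal=True,
    base_two=True,
)
totals = [sum(s for _, s in sentence) for sentence in scores]
print(f"DO total surprisal: {totals[0]:.1f} bits | PO total surprisal: {totals[1]:.1f} bits")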

In case you missed us at #cogsci2025: my lab presented 3 new studies showing how efficient (lossy) compression shapes individual learners, bilinguals, and action abstractions in language, further demonstrating the extraordinary applicability of this principle to human cognition! 🧵

1/n

09.08.2025 13:46 · 👍 29    🔁 13    💬 1    📌 0

(1) 💡 NEW PUBLICATION 💡
Word and construction probabilities explain the acceptability of certain long-distance dependency structures

Work with Curtis Chen and Ted Gibson

Link to paper: tedlab.mit.edu/tedlab_websi...

In memory of Curtis Chen.

05.08.2025 13:25 · 👍 4    🔁 1    💬 1    📌 0
Post image

1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).

31.07.2025 17:55 · 👍 18    🔁 7    💬 1    📌 0
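
For readers new to the noisy-channel framing, here is a toy sketch of the underlying inference (my illustration, not the implemented model from the talk): the comprehender combines a prior over plausible intended sentences with a noise model over the percept, via Bayes' rule, so an implausible percept can be reinterpreted as a small corruption of a plausible sentence.

# Toy noisy-channel inference: P(intended | perceived) ∝ P(perceived | intended) * P(intended)
# Hypothetical prior over intended sentences (in practice, e.g., a language model).
prior = {
    "I was a matron in France": 0.99,  # plausible intended sentence
    "I was a mat in France": 0.01,     # implausible literal reading
}

def likelihood(perceived: str, intended: str) -> float:
    """Toy noise model: each substituted word costs a factor of 10."""
    mismatches = sum(p != i for p, i in zip(perceived.split(), intended.split()))
    return 0.1 ** mismatches

perceived = "I was a mat in France"  # the (implausible) literal input

# Posterior over intended sentences, normalized at the end.
unnorm = {s: prior[s] * likelihood(perceived, s) for s in prior}
z = sum(unnorm.values())
for sentence, score in unnorm.items():
    print(f"P(intended = '{sentence}' | percept) = {score / z:.2f}")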
Preview
Adaptation to noisy language input in real time: Evidence from ERPs Author(s): Li, Jiaxuan; Ortega, Alyssa Viviana; Futrell, Richard; Ryskin, Rachel | Abstract: Language comprehension often deviates from the literal meaning of the input, particularly when errors resem...

Adaptation to noisy language input in real time: Evidence from ERPs
escholarship.org/uc/item/8cm7...

30.07.2025 18:28 · 👍 2    🔁 0    💬 0    📌 0
Preview
The Role of Context Gating in Predictive Sentence Processing Author(s): Gokcen, Yasemin; Noelle, David C.; Ryskin, Rachel | Abstract: Prediction is a core computation in language, as humans use preceding context to implicitly make predictions about the upcoming...

The Role of Context Gating in Predictive Sentence Processing.
escholarship.org/uc/item/21w8...

30.07.2025 18:28 · 👍 1    🔁 0    💬 1    📌 0
Preview
Language experience and prediction across the lifespan: evidence from diachronic fine-tuning of language models Author(s): Chao, Alton; Cain, Ellis; Ryskin, Rachel | Abstract: Humans predict upcoming language input from context, which depends on prior language experience. This suggests that older adults' predic...

Language experience and prediction across the lifespan: evidence from diachronic fine-tuning of language models.
escholarship.org/uc/item/83b1...

30.07.2025 18:28 · 👍 1    🔁 0    💬 1    📌 0
Preview
Efficient Audience Design in LLMs Author(s): Ryskin, Rachel; Gawel, Olivia; Tanzer, Owen; Pailo, Viniccius; Kello, Christopher | Abstract: During human communication, speakers balance informativeness and effort by tailoring their lang...

Efficient Audience Design in LLMs
escholarship.org/uc/item/6zm2...

30.07.2025 18:28 · 👍 1    🔁 0    💬 1    📌 0
Post image

Looking forward to seeing everyone at #CogSci2025 this week! Come check out what we've been working on in the LInC Lab, along with our fantastic collaborators!

Paper 🔗 in 🧵👇

30.07.2025 18:28 · 👍 5    🔁 2    💬 1    📌 0
Preview
Skewed distributions facilitate infants' word segmentation Infants can use statistical patterns to segment continuous speech into words, a crucial task in language acquisition. Experimental studies typically i…

Some happy science news (a small light in times of darkness). New paper out with @luciewolters.bsky.social and Mits Ota: Skewed distributions facilitate infants' word segmentation. sciencedirect.com/science/arti...

10.07.2025 20:14 · 👍 6    🔁 2    💬 1    📌 0

Thrilled to see this work published, and even more thrilled to have been part of such a great collaborative team!

One key takeaway for me: Webcam eye-tracking w/ jsPsych is awesome for 4-quadrant visual world paradigm studies -- less so for displays w/ smaller ROIs.

08.07.2025 21:41 · 👍 4    🔁 0    💬 0    📌 0

New paper w/ @ryskin.bsky.social and Chen Yu: We analyzed parent-child toy play and found that cross-situational learning statistics were present in naturalistic settings!

onlinelibrary.wiley.com/doi/epdf/10....

19.06.2025 18:24 · 👍 4    🔁 1    💬 1    📌 0
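
For readers unfamiliar with cross-situational learning, a toy sketch of the statistic at issue: across situations, a learner tallies word-object co-occurrences, and the correct referent accumulates the highest count even though any single situation is ambiguous. The mini-corpus below is invented, not the study's toy-play data.

# Toy cross-situational learning: aggregate word-object co-occurrence counts.
from collections import Counter, defaultdict

# Each "situation" pairs the words heard with the objects present (invented data).
situations = [
    (["ball", "dog"], ["BALL", "DOG"]),
    (["ball", "cup"], ["BALL", "CUP"]),
    (["dog", "cup"],  ["DOG", "CUP"]),
    (["ball"],        ["BALL", "DOG"]),   # ambiguous on its own
]

cooccurrence = defaultdict(Counter)
for words, objects in situations:
    for w in words:
        for o in objects:
            cooccurrence[w][o] += 1   # tally every word-object pairing

# After aggregating across situations, the most frequent pairing wins out.
for word, counts in cooccurrence.items():
    referent, n = counts.most_common(1)[0]
    print(f"{word} -> {referent} (co-occurred {n} times)")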
Preview
Post-Doctoral position - Department of Linguistics University of California, Davis is hiring. Apply now!

I'm hiring a postdoc to start this fall! Come work with me? recruit.ucdavis.edu/JPF07123

30.05.2025 01:30 · 👍 25    🔁 25    💬 0    📌 1
Video thumbnail

What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness, revealing an interpretable, topographic representational basis for language processing shared across individuals

23.05.2025 16:59 · 👍 71    🔁 30    💬 3    📌 0
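
One generic way to look for a small number of organizing axes in voxel responses is a low-rank decomposition such as PCA; the sketch below runs it on simulated responses built from two latent axes (stand-ins for processing difficulty and meaning abstractness). This illustrates the idea of a low-dimensional representational basis, not the analysis actually used in the paper.

# PCA sketch on simulated voxel responses generated from two latent axes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

n_voxels, n_conditions = 500, 50
difficulty_axis = rng.standard_normal(n_conditions)       # latent axis 1
abstractness_axis = rng.standard_normal(n_conditions)     # latent axis 2
loadings = rng.standard_normal((n_voxels, 2))              # each voxel mixes the two axes
responses = loadings @ np.vstack([difficulty_axis, abstractness_axis])
responses += 0.5 * rng.standard_normal((n_voxels, n_conditions))  # measurement noise

pca = PCA(n_components=5).fit(responses)
print("variance explained by first 5 components:",
      np.round(pca.explained_variance_ratio_, 2))
# With only two latent axes in the simulation, the first two components dominate,
# which is the kind of signature a two-axis organization would produce in real data.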
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model, visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").

🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n

20.05.2025 19:04 · 👍 154    🔁 43    💬 4    📌 1
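
As a much-simplified illustration of distilling a Bayesian prior into a network (not the paper's meta-learning setup), the sketch below samples tasks from a Beta prior over coin weights and meta-trains a one-parameter model to predict the next flip from a handful of observed flips. After training across many sampled tasks, its predictions approximate the Bayesian posterior predictive, which is the sense in which the prior has been "distilled" into the learner.

# Distilling a Beta prior into a tiny learner by training across sampled tasks.
import numpy as np

rng = np.random.default_rng(4)
ALPHA, BETA, K = 2.0, 2.0, 5      # the Bayesian prior and the context size
w, b = 0.0, 0.0                   # a one-parameter "network": sigmoid(w * mean + b)
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(50_000):
    theta = rng.beta(ALPHA, BETA)             # sample a task from the prior
    flips = rng.binomial(1, theta, size=K)    # a small dataset for that task
    target = rng.binomial(1, theta)           # the next observation to predict
    x = flips.mean()
    p = sigmoid(w * x + b)
    grad = p - target                         # cross-entropy gradient
    w -= lr * grad * x
    b -= lr * grad

# Compare the meta-trained model with the exact Beta-Bernoulli posterior predictive.
for heads in range(K + 1):
    model_p = sigmoid(w * (heads / K) + b)
    bayes_p = (heads + ALPHA) / (K + ALPHA + BETA)
    print(f"{heads}/{K} heads: model {model_p:.2f} vs. Bayes {bayes_p:.2f}")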

Unfortunately, the NSF grant that supports our work has been terminated. This is a setback, but our mission has not changed. We will continue to work hard on making cognitive science a more inclusive field. Stay tuned for upcoming events.

21.04.2025 19:05 · 👍 263    🔁 95    💬 4    📌 6
APA PsycNet

Does the mind degrade or become enriched as we grow old? When it comes to explaining healthy aging effects, the evidence supports enrichment. Indeed, the evidence suggests that changes in crystallized (enrichment) and fluid (slowing) intelligence share a common cause. psycnet.apa.org/record/2026-...

17.04.2025 13:08 · 👍 10    🔁 4    💬 0    📌 0
title of paper (in text) plus author list

Time course of word recognition for kids at different ages.

Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...

14.04.2025 21:58 · 👍 68    🔁 27    💬 1    📌 1
from minicons import scorer
from nltk.tokenize import TweetTokenizer

lm = scorer.IncrementalLMScorer("gpt2")

# your own tokenizer function that returns a list of words
# given some sentence input
word_tokenizer = TweetTokenizer().tokenize

# word scoring
lm.word_score_tokenized(
    ["I was a matron in France", "I was a mat in France"], 
    bos_token=True, # needed for GPT-2/Pythia and NOT needed for others
    tokenize_function=word_tokenizer,
    bow_correction=True, # Oh and Schuler correction
    surprisal=True,
    base_two=True
)

'''
First word = -log_2 P(word | <beginning of text>)

[[('I', 6.1522440910339355),
  ('was', 4.033324718475342),
  ('a', 4.879510402679443),
  ('matron', 17.611848831176758),
  ('in', 2.5804288387298584),
  ('France', 9.036953926086426)],
 [('I', 6.1522440910339355),
  ('was', 4.033324718475342),
  ('a', 4.879510402679443),
  ('mat', 19.385351181030273),
  ('in', 6.76780366897583),
  ('France', 10.574726104736328)]]
'''


another day another minicons update (potentially a significant one for psycholinguists?)

"Word" scoring is now a thing! You just have to supply your own splitting function!

pip install -U minicons for merriment

02.04.2025 03:35 · 👍 21    🔁 7    💬 3    📌 0
Figure 1. A schematic depiction of a model-mechanism mapping between a human learning system (left side) and a cognitive model (right side). Candidate model mechanism mappings are pictured as mapping between representations but also can be in terms of input data, architecture, or learning objective.

Figure 2. Data efficiency in human learning. (left) Order of magnitude of LLM vs. human training data, plotted by human age. Ranges are approximated from Frank (2023a). (right) A schematic depiction of evaluation scaling curves for human learners vs. models plotted by training data quantity.

Paper abstract

AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?

In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...

06.03.2025 17:39 · 👍 63    🔁 25    💬 2    📌 0
Post image

🚨 New Preprint!!

LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment: linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵

05.03.2025 15:58 · 👍 59    🔁 24    💬 1    📌 2
Preview
Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech - Psychonomic Bulletin & Review Researchers have generally assumed that listeners perceive speech compositionally, based on the combined processing of local acoustic–phonetic cues associated with individual linguistic units. Yet, th...

Out now! "Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech" in Psychonomic Bulletin & Review. Led by awesome postdoc Seung-Eun Kim, with B. R. Chernyak, @keshet.bsky.social and A. Bradlow. Supported by NSF. doi.org/10.3758/s134... 1/

28.02.2025 18:33 · 👍 27    🔁 5    💬 1    📌 0
Preview
Expectation violations signal goals in novel human communication - Nature Communications In the absence of language, there is a lack of common knowledge necessary for efficient communication. Here, the authors show that people solve this problem by reverting to commonly accepted physical ...

✨ Excited to share that our paper on how expectation violations shape novel human communication has just been published in Nature Communications! ✨

📖 Read the full paper: www.nature.com/articles/s41....

🧵 Detailed breakdown below! 👇

27.02.2025 14:26 · 👍 10    🔁 2    💬 1    📌 1
