Rachel Ryskin

@ryskin.bsky.social

Cognitive scientist @ UC Merced http://raryskin.github.io PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io she

431 Followers  |  266 Following  |  17 Posts  |  Joined: 24.10.2023

Latest posts by ryskin.bsky.social on Bluesky

(1) 💡NEW PUBLICATION💡
Word and construction probabilities explain the acceptability of certain long-distance dependency structures

Work with Curtis Chen and Ted Gibson

Link to paper: tedlab.mit.edu/tedlab_websi...

In memory of Curtis Chen.

05.08.2025 13:25 — 👍 4    🔁 1    💬 1    📌 0

1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).

31.07.2025 17:55 — 👍 18    🔁 7    💬 1    📌 0
Adaptation to noisy language input in real time: Evidence from ERPs Author(s): Li, Jiaxuan; Ortega, Alyssa Viviana; Futrell, Richard; Ryskin, Rachel | Abstract: Language comprehension often deviates from the literal meaning of the input, particularly when errors resem...

Adaptation to noisy language input in real time: Evidence from ERPs
escholarship.org/uc/item/8cm7...

30.07.2025 18:28 — 👍 2    🔁 0    💬 0    📌 0
The Role of Context Gating in Predictive Sentence Processing Author(s): Gokcen, Yasemin; Noelle, David C.; Ryskin, Rachel | Abstract: Prediction is a core computation in language, as humans use preceding context to implicitly make predictions about the upcoming...

The Role of Context Gating in Predictive Sentence Processing.
escholarship.org/uc/item/21w8...

30.07.2025 18:28 — 👍 1    🔁 0    💬 1    📌 0
Language experience and prediction across the lifespan: evidence from diachronic fine-tuning of language models Author(s): Chao, Alton; Cain, Ellis; Ryskin, Rachel | Abstract: Humans predict upcoming language input from context, which depends on prior language experience. This suggests that older adults' predic...

Language experience and prediction across the lifespan: evidence from diachronic fine-tuning of language models.
escholarship.org/uc/item/83b1...

30.07.2025 18:28 — 👍 1    🔁 0    💬 1    📌 0
Efficient Audience Design in LLMs Author(s): Ryskin, Rachel; Gawel, Olivia; Tanzer, Owen; Pailo, Viniccius; Kello, Christopher | Abstract: During human communication, speakers balance informativeness and effort by tailoring their lang...

Efficient Audience Design in LLMs
escholarship.org/uc/item/6zm2...

30.07.2025 18:28 — 👍 1    🔁 0    💬 1    📌 0

Looking forward to seeing everyone at #CogSci2025 this week! Come check out what we've been working on in the LInC Lab, along with our fantastic collaborators!

Paper 🔗 in 🧵👇

30.07.2025 18:28 — 👍 5    🔁 2    💬 1    📌 0
Skewed distributions facilitate infants' word segmentation Infants can use statistical patterns to segment continuous speech into words, a crucial task in language acquisition. Experimental studies typically i…

Some happy science news (a small light in times of darkness). New paper out with @luciewolters.bsky.social and Mits Ota: Skewed distributions facilitate infants' word segmentation. sciencedirect.com/science/arti...

10.07.2025 20:14 — 👍 6    🔁 2    💬 1    📌 0

Thrilled to see this work published — and even more thrilled to have been part of such a great collaborative team!

One key takeaway for me: Webcam eye-tracking w/ jsPsych is awesome for 4-quadrant visual world paradigm studies -- less so for displays w/ smaller ROIs.

08.07.2025 21:41 — 👍 4    🔁 0    💬 0    📌 0

New paper w/ @ryskin.bsky.social and Chen Yu: We analyzed parent-child toy play and found that cross-situational learning statistics were present in naturalistic settings!

onlinelibrary.wiley.com/doi/epdf/10....

19.06.2025 18:24 — 👍 4    🔁 1    💬 1    📌 0
Post-Doctoral position - Department of Linguistics University of California, Davis is hiring. Apply now!

I'm hiring a postdoc to start this fall! Come work with me? recruit.ucdavis.edu/JPF07123

30.05.2025 01:30 — 👍 25    🔁 25    💬 0    📌 1

What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals

23.05.2025 16:59 — 👍 71    🔁 30    💬 3    📌 0
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model – visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").

🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n
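
For readers who want a concrete picture of what "distilling a prior into a network" can mean, here is a toy sketch, not the paper's implementation: sample tasks from a simple Beta-Bernoulli prior, train a small network across those tasks to predict the next observation, and check that its outputs roughly track the Bayesian posterior predictive. All names and numbers below are illustrative assumptions.

# Toy sketch of prior distillation via meta-learning (illustrative, not the paper's code).
# Tasks are sampled from a Beta-Bernoulli generative model; a small network is trained
# across many such tasks to predict the next coin flip from the observed counts. After
# training, its predictions should roughly match the Bayesian posterior predictive
# (heads + a) / (flips + a + b), i.e. the prior has been "distilled" into the weights.
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 2.0        # Beta prior hyperparameters (assumed for illustration)
max_flips = 10

def sample_task():
    """One task = a coin bias drawn from the prior, some observed flips, and the next flip."""
    theta = rng.beta(a, b)
    n = int(rng.integers(0, max_flips + 1))
    heads = rng.binomial(n, theta)
    next_flip = rng.binomial(1, theta)
    x = np.array([heads / max_flips, n / max_flips])   # features: observed counts, scaled
    return x, next_flip

# Tiny one-hidden-layer network trained with plain SGD on the Bernoulli log loss.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, 16);      b2 = 0.0
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

for step in range(100_000):                 # "meta-learning": many tasks from the prior
    x, y = sample_task()
    h, p = forward(x)
    dz = p - y                              # gradient of the log loss w.r.t. the logit
    W2 -= lr * dz * h;  b2 -= lr * dz
    dpre = dz * W2 * (1.0 - h**2)
    W1 -= lr * np.outer(x, dpre);  b1 -= lr * dpre

# Compare the trained network with the Bayesian posterior predictive.
for heads, n in [(0, 0), (1, 3), (7, 10)]:
    _, p_net = forward(np.array([heads / max_flips, n / max_flips]))
    p_bayes = (heads + a) / (n + a + b)
    print(f"{heads}/{n} heads: network={p_net:.2f}  Bayes={p_bayes:.2f}")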

20.05.2025 19:04 — 👍 154    🔁 43    💬 4    📌 1

Unfortunately, the NSF grant that supports our work has been terminated. This is a setback, but our mission has not changed. We will continue to work hard on making cognitive science a more inclusive field. Stay tuned for upcoming events.

21.04.2025 19:05 — 👍 265    🔁 95    💬 4    📌 6
APA PsycNet

Does the mind degrade or become enriched as we grow old? For healthy aging, the evidence supports enrichment. Indeed, the evidence suggests that changes in crystallized intelligence (enrichment) and fluid intelligence (slowing) share a common cause. psycnet.apa.org/record/2026-...

17.04.2025 13:08 — 👍 9    🔁 4    💬 0    📌 0
title of paper (in text) plus author list

Time course of word recognition for kids at different ages.

Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...

14.04.2025 21:58 — 👍 68    🔁 27    💬 1    📌 1
from minicons import scorer
from nltk.tokenize import TweetTokenizer

lm = scorer.IncrementalLMScorer("gpt2")

# your own tokenizer function that returns a list of words
# given some sentence input
word_tokenizer = TweetTokenizer().tokenize

# word scoring
lm.word_score_tokenized(
    ["I was a matron in France", "I was a mat in France"], 
    bos_token=True, # needed for GPT-2/Pythia and NOT needed for others
    tokenize_function=word_tokenizer,
    bow_correction=True, # Oh and Schuler correction
    surprisal=True,
    base_two=True
)

'''
First word = -log_2 P(word | <beginning of text>)

[[('I', 6.1522440910339355),
  ('was', 4.033324718475342),
  ('a', 4.879510402679443),
  ('matron', 17.611848831176758),
  ('in', 2.5804288387298584),
  ('France', 9.036953926086426)],
 [('I', 6.1522440910339355),
  ('was', 4.033324718475342),
  ('a', 4.879510402679443),
  ('mat', 19.385351181030273),
  ('in', 6.76780366897583),
  ('France', 10.574726104736328)]]
'''

another day another minicons update (potentially a significant one for psycholinguists?)

"Word" scoring is now a thing! You just have to supply your own splitting function!

pip install -U minicons for merriment
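
As a follow-up, the nested list returned above (one list of (word, surprisal) tuples per sentence) is easy to post-process with plain Python. A small sketch, assuming the lm and word_tokenizer objects from the example above are in scope (everything else is illustrative):

# Capture the return value shown above, then reduce it.
scores = lm.word_score_tokenized(
    ["I was a matron in France", "I was a mat in France"],
    bos_token=True,
    tokenize_function=word_tokenizer,
    bow_correction=True,
    surprisal=True,
    base_two=True,
)

# Sentence-level surprisal (bits) = sum of the word surprisals.
totals = [sum(s for _, s in sent) for sent in scores]
print(totals)

# Word-by-word comparison of the two (word-aligned) sentences,
# e.g. the surprisal jump at "matron" vs. "mat".
for (w1, s1), (w2, s2) in zip(scores[0], scores[1]):
    print(f"{w1:>8} {s1:6.2f} | {w2:>8} {s2:6.2f} | diff {s2 - s1:+.2f}")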

02.04.2025 03:35 — 👍 21    🔁 7    💬 3    📌 0
Figure 1. A schematic depiction of a model-mechanism mapping between a human learning system (left side) and a cognitive model (right side). Candidate model mechanism mappings are pictured as mapping between representations but also can be in terms of input data, architecture, or learning objective.

Figure 2. Data efficiency in human learning. (left) Order of magnitude of LLM vs. human training data, plotted by human age. Ranges are approximated from Frank (2023a). (right) A schematic depiction of evaluation scaling curves for human learners vs. models plotted by training data quantity.

Paper abstract

AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?

In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...

06.03.2025 17:39 — 👍 63    🔁 25    💬 2    📌 0

🚨 New Preprint!!

LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵

05.03.2025 15:58 — 👍 56    🔁 24    💬 1    📌 1
Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech - Psychonomic Bulletin & Review Researchers have generally assumed that listeners perceive speech compositionally, based on the combined processing of local acoustic–phonetic cues associated with individual linguistic units. Yet, th...

Out now! "Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech" in Psychonomic Bulletin+Review, led by awesome postdoc Seung-Eun Kim, with B. R. Chernyak, @keshet.bsky.social and A. Bradlow. Supported by NSF. doi.org/10.3758/s134... 1/

28.02.2025 18:33 — 👍 28    🔁 5    💬 2    📌 0
Expectation violations signal goals in novel human communication - Nature Communications In the absence of language, there is a lack of common knowledge necessary for efficient communication. Here, the authors show that people solve this problem by reverting to commonly accepted physical ...

✨Excited to share that our paper on how expectation violations shape novel human communication has just been published in Nature Communications! ✨

📖 read the full paper: www.nature.com/articles/s41....

🧡 Detailed breakdown bellow! πŸ‘‡

27.02.2025 14:26 — 👍 10    🔁 2    💬 1    📌 1
Depiction comparing standard views of statistical learning with a sponge and the new information foraging view with an octopus

What is human #StatisticalLearning for? The standard assumption is that the goal of SL is to learn the regularities in the environment to guide behavior. In our new Psych Review paper, we argue that SL instead provides the basis for novelty detection within an information foraging system.
1/2

27.02.2025 14:06 — 👍 55    🔁 22    💬 3    📌 0
The ECOLANG Multimodal Corpus of adult-child and adult-adult Language - Scientific Data

Inaugural post on bsky: The ECOLANG Multimodal Corpus, providing audiovisual recordings and annotations of multimodal communicative behaviours by English-speaking adults in dyadic interaction with a child or another adult, is now available: rdcu.be/eblMF

26.02.2025 10:19 — 👍 49    🔁 23    💬 1    📌 0
Image of cover of forthcoming More Than Words: How Talking Sharpens the Mind and Shapes Our World, by Maryellen MacDonald

My new book, MORE THAN WORDS (Avery/PenguinRandomHouse) arrives 6/3! It tells the story of how we produce language & how talking shapes our lives in surprising ways. It's psyling for gen'l audiences! Info & preorders www.penguinrandomhouse.com/books/724046/more-than-words-by-maryellen-macdonald-phd/

24.02.2025 15:14 — 👍 56    🔁 13    💬 4    📌 7
Pragmatics as Social Inference About Intentional Action Abstract. Pragmatic inferences are based on assumptions about how speakers communicate: speakers are taken to be cooperative and rational; they consider alternatives and make intentional choices to pr...

📃Pragmatics as Social Inference About Intentional Action

New paper with @mcxfrank.bsky.social in Open Mind

We show that pragmatic inferences

- work w/o language
- take into account senders' epistemic states
- are conditional on intentional production of signals

direct.mit.edu/opmi/article...

20.02.2025 08:27 — 👍 17    🔁 4    💬 0    📌 0
GitHub - karthik/wesanderson: A Wes Anderson color palette for R

Thanks!

The main color palette is Zissou1 from the awesome Wes Anderson color palette R package: github.com/karthik/wesa...

07.02.2025 17:46 — 👍 1    🔁 0    💬 1    📌 0
Whale song shows language-like statistical structure Humpback whale song is a culturally transmitted behavior. Human language, which is also culturally transmitted, has statistically coherent parts whose frequency distribution follows a power law. These...

SO very excited about new paper with @simonkirby.bsky.social and @ellengarland.bsky.social: We used infant-inspired tools to analyze eight years of humpback whale song, finding recurring parts with a Zipfian frequency distribution. www.science.org/doi/10.1126/...

06.02.2025 21:13 — 👍 37    🔁 12    💬 0    📌 1

Thanks, Jamie!

Sounds like you were the best kind of reviewer :)

06.02.2025 21:45 — 👍 1    🔁 0    💬 0    📌 0

6/ These findings suggest that some aphasia-related comprehension challenges may be due to altered *expectations about noise* rather than a purely syntactic deficit. More broadly, understanding language processing in aphasia through a noisy-channel lens could inform new approaches to treatment.

06.02.2025 21:10 — 👍 0    🔁 0    💬 2    📌 0

5/ What about individuals with aphasia?
We found:

✅ They rely more on noisy-channel inferences than healthy adults, even after accounting for differences in guessing between the populations with a hierarchical mixture model (sketched schematically below).

🔹 Unlike in healthy adults, it remains unclear whether they can adapt to noise.
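
For intuition, the guessing correction can be written as a two-component mixture: with some lapse probability a response is a coin-flip guess, otherwise it follows the noisy-channel interpretation. The sketch below is schematic only, with made-up numbers, not the study's hierarchical implementation (where lapse rates would additionally be drawn from group-level distributions per population).

# Schematic guess-vs-inference mixture likelihood (illustrative, not the paper's model).
import numpy as np

def mixture_loglik(correct, p_noisy_channel, lapse):
    """Log-likelihood of binary responses: guess with prob. lapse, else noisy-channel.

    correct          : array of 0/1 responses
    p_noisy_channel  : model-predicted P(correct) per trial
    lapse            : participant-level guessing probability
    """
    p = lapse * 0.5 + (1.0 - lapse) * np.asarray(p_noisy_channel, dtype=float)
    c = np.asarray(correct, dtype=float)
    return float(np.sum(c * np.log(p) + (1.0 - c) * np.log(1.0 - p)))

# Toy data: if noisy-channel predictions alone over-predict accuracy,
# the same responses are more likely under a higher lapse rate.
resp = [1, 1, 0, 1, 0, 0, 1, 1]
pred = [0.9, 0.8, 0.9, 0.7, 0.8, 0.9, 0.6, 0.9]
print(mixture_loglik(resp, pred, lapse=0.1), mixture_loglik(resp, pred, lapse=0.4))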

06.02.2025 21:10 — 👍 0    🔁 0    💬 1    📌 0
