
Rachel Ryskin

@ryskin.bsky.social

Cognitive scientist @ UC Merced http://raryskin.github.io PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io

493 Followers  |  320 Following  |  22 Posts  |  Joined: 24.10.2023

Latest posts by ryskin.bsky.social on Bluesky

Distinct neuronal populations in the human brain combine content and context - Nature Single-neuron recordings in humans reveal largely separate content and context neurons whose coordinated activity flexibly places memory items in context.

Recently published in @nature.com: the human brain stores what happened and the context in mostly separate neurons—binding them only when needed, which enables flexible memory (and hopefully avoids confusion) 🧪 www.nature.com/articles/s41...

20.01.2026 20:50 — 👍 16    🔁 5    💬 1    📌 0
Apes Share Human Ability to Imagine (YouTube video by Johns Hopkins University)

Imagination in bonobos!

I am thrilled to share a new paper w/ Amalia Bastos, out now in @science.org

We provide the first experimental evidence that a nonhuman animal can follow along a pretend scenario & track imaginary objects. Work w/ Kanzi, the bonobo, at Ape Initiative

youtu.be/NUSHcQQz2Ko

05.02.2026 19:18 — 👍 274    🔁 109    💬 9    📌 10

It was such a treat for us. Thanks for making the trip down and sharing your fascinating work!

04.02.2026 03:56 — 👍 1    🔁 0    💬 1    📌 0

How do diverse context structures reshape representations in LLMs?
In our new work, we explore this via representational straightening. We found that LLMs are like a Swiss Army knife: they select different computational mechanisms, reflected in different representational structures. 1/
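Representational straightening is commonly measured as the mean curvature of a trajectory of hidden states: the average angle between successive displacement vectors. A minimal sketch of that standard metric on toy 2-D trajectories (illustrative data, not the paper's actual model activations):

```python
import numpy as np

def mean_curvature(traj):
    """Average angle (radians) between consecutive displacement vectors of a
    trajectory of states; 0 = perfectly straight ("straightened") trajectory."""
    diffs = np.diff(traj, axis=0)                       # displacement vectors
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cos = np.sum(diffs[:-1] * diffs[1:], axis=1).clip(-1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

# Toy 2-D "state trajectories" (made up for illustration).
straight = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
bent     = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(mean_curvature(straight))   # 0.0
print(mean_curvature(bent))       # ~1.5708 (pi/2)
```

Straighter trajectories give angles near zero; applying the same metric to a model's layer-wise or token-wise states is where the real analysis would begin.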

04.02.2026 02:54 — 👍 38    🔁 11    💬 1    📌 1

The Visual Learning Lab is hiring TWO lab coordinators!

Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!)—with flexible summer start dates.

30.01.2026 23:21 — 👍 47    🔁 41    💬 1    📌 0

The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network!
authors.elsevier.com/a/1mUU83BtfH...
1/n 🧵👇

22.01.2026 17:21 — 👍 68    🔁 20    💬 2    📌 4

Interpreting EEG requires understanding how the skull smears electrical fields as they propagate from the cortex. I made a browser-based simulator for my EEG class to visualize how dipole depth/orientation change the topomap.
dbrang.github.io/EEG-Dipole-D...

GitHub page: github.com/dbrang/EEG-D...
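The depth effect can be sketched with the simplest possible forward model, assuming an infinite homogeneous conductor rather than the layered head geometry a real simulator would use; all parameter values below are illustrative:

```python
import numpy as np

# Minimal forward model: potential of a current dipole in an INFINITE
# HOMOGENEOUS conductor, V = p·d / (4*pi*sigma*|d|^3). A real head model
# adds skull/scalp layers, which smear the map further.
def dipole_potential(obs, pos, moment, sigma=0.33):
    d = obs - pos                          # dipole-to-sensor vectors
    r = np.linalg.norm(d, axis=-1)
    return (d @ moment) / (4 * np.pi * sigma * r**3)

# "Scalp": 101 sensors along a line at z = 0; radial (+z) dipole at two depths.
sensors = np.stack([np.linspace(-0.1, 0.1, 101),
                    np.zeros(101), np.zeros(101)], axis=1)
p = np.array([0.0, 0.0, 1e-8])             # dipole moment in A*m (illustrative)
shallow = dipole_potential(sensors, np.array([0.0, 0.0, -0.02]), p)
deep    = dipole_potential(sensors, np.array([0.0, 0.0, -0.06]), p)

# Width at half maximum, counted in sensors: deeper source -> broader, weaker map.
fwhm = lambda v: int(np.sum(v > v.max() / 2))
print(fwhm(shallow), fwhm(deep))           # deep map is wider
print(shallow.max() > deep.max())          # shallow map has the stronger peak
```

Even this crude model reproduces the qualitative point the simulator visualizes: depth alone trades peak amplitude for spatial spread of the topomap.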

20.01.2026 17:00 — 👍 122    🔁 49    💬 4    📌 1
Cultural Transmission Promotes the Emergence of Statistical Properties That Support Language Learning Language is passed across generations through cultural transmission. Prior experimental work, where participants reproduced sets of non-linguistic sequences in transmission chains, shows that this pr...

New paper with @inbalarnon.bsky.social and @simonkirby.bsky.social! Learnability pressures drive the emergence of core statistical properties of language–e.g. Zipf's laws–in an iterated sequence learning experiment, with learners’ RTs indicating sensitivity to the emerging sequence information.
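For reference, Zipf's rank-frequency law mentioned here says a word's frequency falls off roughly as 1/rank, i.e. a slope of about -1 on log-log axes. A self-contained toy check of that exponent (illustrative only, not the paper's analysis):

```python
import math

# Zipf's rank-frequency law: freq(r) ~ C / r^a with a ~ 1.
# Toy check: fit the exponent by least squares in log-log space.
def zipf_exponent(freqs):
    """Slope of log(freq) vs log(rank) for frequencies sorted descending."""
    freqs = sorted(freqs, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

# An exactly Zipfian distribution recovers an exponent of -1.
ideal = [1000 / r for r in range(1, 51)]
print(round(zipf_exponent(ideal), 3))   # -1.0
```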

06.01.2026 14:39 — 👍 6    🔁 6    💬 0    📌 0

Thanks, Jamie! Intergenerational communication is something we’ve been interested in too. Would love to chat!

03.01.2026 17:45 — 👍 1    🔁 0    💬 0    📌 0

Does our "semantic space" get stuck in the past as we age?

New work by @ellscain.bsky.social uses historical embeddings + behavioral data to show we are truly lifelong learners.

Older adults don't rely on historical meanings—they update them to match current language! 🧠✨

doi.org/10.1162/OPMI...

02.01.2026 19:33 — 👍 48    🔁 9    💬 3    📌 1

A quick read to start off 2026…

01.01.2026 18:44 — 👍 7    🔁 2    💬 0    📌 0

I may be a *little* biased but this 📘 is GREAT! If you ever found language structure interesting, but were turned off by implausible and overly complicated accounts, this book is 4U: a simple and empirically grounded account of the syntax of natural lgs. A must-read for lang researchers+aficionados!

24.12.2025 20:42 — 👍 52    🔁 9    💬 0    📌 0

New book! I have written a book called Syntax: A cognitive approach, published by MIT Press.

This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...

24.12.2025 19:55 — 👍 122    🔁 41    💬 2    📌 3
A distinct set of brain areas process prosody--the melody of speech Human speech carries information beyond the words themselves: pitch, loudness, duration, and pauses--jointly referred to as 'prosody'--emphasize critical words, help group words into phrases, and conv...

New preprint on prosody in the brain!
tinyurl.com/2ndswjwu
HeeSo Kim, Niharika Jhingan, Sara Swords, @hopekean.bsky.social, @coltoncasto.bsky.social, Jennifer Cole, @evfedorenko.bsky.social

Prosody areas are distinct from pitch, speech, and multiple-demand areas, and partly overlap with lang+social areas → 🧵

15.12.2025 19:27 — 👍 35    🔁 13    💬 1    📌 3
The MIT Press and Open Mind partner with Lyrasis to support diamond open access publishing through the Open Access Community Investment Program The Open Access Community Investment Program (OACIP), an innovative model for community action, will seek support for MIT Press journal Open Mind through July 2026

The Press and @openmindjournal.bsky.social are pleased to announce a partnership with Lyrasis through the Open Access Community Investment Program (OACIP).

Learn how your institution can support this initiative to continue providing the latest #cogsci research—free of charge—here: bit.ly/452nMma

11.12.2025 14:30 — 👍 22    🔁 9    💬 1    📌 0
Semantic reasoning takes place largely outside the language network The brain's language network is often implicated in the representation and manipulation of abstract semantic knowledge. However, this view is inconsistent with a large body of evidence suggesting that...

The last chapter of my PhD (expanded) is finally out as a preprint!

“Semantic reasoning takes place largely outside the language network” 🧠🧐

www.biorxiv.org/content/10.6...

What is semantic reasoning? Read on! 🧵👇

11.12.2025 18:34 — 👍 88    🔁 25    💬 2    📌 4

Using a large-scale individual-differences investigation (with ~800 participants each performing an ~8-hour battery of non-literal comprehension tasks), we found that pragmatic language use fractionates into 3 components: social conventions, intonation, and world-knowledge-based causal reasoning.

09.12.2025 20:10 — 👍 7    🔁 1    💬 0    📌 0
A figure demonstrating the different aspects of the corpus described in the tweet. There is a main isometric 3D view of a level in the Portal 2 co-op game, with some portals, lasers, and the blue and orange players. Inset, there are first-person captures of the blue and orange player views. There is also a box containing the transcribed dialogue with timestamps and labels for the discursive acts. Finally, there is a box containing a task and a list of subtasks. Some subtasks are already crossed out, with the time that they have been completed. The last subtask ("Player 2 places portal 4 on wall 4") is marked incomplete.

The dialogue is as follows:

Blue: Can you put your other portal up here? (tagged as directive)
Orange: Where? (tagged as request for clarification)
Blue: On uh, on this wall. (tagged as directive)
Blue: So that it uh points at the circle. (tagged as directive)
Orange: Okay. (tagged as commit)

The full list of subtasks is:

Task: Redirect lasers
Subtask: Player 1 places portal 1 on wall 1. (completed)
Subtask: Player 1 places portal 2 on wall 2 or 3. (completed)
Subtask: Player 2 places portal 3 opposite of portal 2. (completed)
Subtask: Player 2 places portal 4 on wall 4. (incomplete)


A couple years (!) in the making: we’re releasing a new corpus of embodied, collaborative problem solving dialogues. We paid 36 people to play Portal 2’s co-op mode and collected their speech + game recordings.

Paper: arxiv.org/abs/2512.03381
Website: berkeley-nlp.github.io/portal-dialo...

1/n

05.12.2025 18:54 — 👍 102    🔁 30    💬 3    📌 8
OneStop: A 360-Participant English Eye Tracking Dataset with Different Reading Regimes - Scientific Data

Now out in Scientific Data, OneStop: A 360-Participant English Eye Tracking Dataset with Different Reading Regimes.

www.nature.com/articles/s41...

05.12.2025 06:43 — 👍 5    🔁 1    💬 0    📌 0

#NeurIPS2025 Check out EyeBench 👀, a mega-project which provides much-needed infrastructure for loading & preprocessing eye-tracking-for-reading datasets, and addressing super exciting modeling challenges: decoding linguistic knowledge 👩 and reading interactions 👩+📖 from gaze!

eyebench.github.io

02.12.2025 10:21 — 👍 9    🔁 2    💬 0    📌 1

Looking forward to #NeurIPS25 this week 🏝️! I'll be presenting at Poster Session 3 (11-2 on Thursday). Feel free to reach out!

01.12.2025 22:12 — 👍 10    🔁 3    💬 0    📌 1
Social Tinkering: The Social Foundations of Cultural Complexity | Behavioral and Brain Sciences | Cambridge Core

📣 Very happy to announce a new BBS target article with Nick Chater in which we propose a new theory of cultural evolution, highlighting the importance of bottom-up social interaction in explaining the emergence of cultural complexity
🧡 1/8

www.cambridge.org/core/journal...

28.11.2025 15:36 — 👍 34    🔁 15    💬 1    📌 1
GitHub - nickduran/align2-linguistic-alignment: ALIGN 2.0: Modern Python package for multi-level linguistic alignment analysis. Faster, streamlined, and feature-rich while maintaining full compatibility with the original ALIGN methodology (Duran...

For all of you using the ALIGN library (to measure lexical, syntactic, and semantic alignment in conversations), Nick Duran has put together a great refactoring: ALIGN 2.0 (github.com/nickduran/al...), now integrated with spaCy and BERT

24.11.2025 10:17 — 👍 15    🔁 6    💬 0    📌 0
Preview
What enables human language? A biocultural framework Explaining the origins of language is a key challenge in understanding ourselves as a species. We present an empirical framework that draws on synergies across fields to facilitate robust studies of l...

Origins of language, one of humanity’s most distinctive traits, may be best explained as a unique convergence of multiple capacities each with its own evolutionary history, involving intertwined roles of biology & culture. This framing can expand research horizons. A 🧵 on our @science.org paper. 🧪 1/n

23.11.2025 11:52 — 👍 206    🔁 86    💬 6    📌 10

A whale conversation in whale vowels. Pinchy the whale and her conversant.

The vowels are so clear that they can be transcribed with our human letters.

aye, aye!

19.11.2025 00:22 — 👍 28    🔁 6    💬 1    📌 5
A busy figure showing how different types of multimodal signals reduce over experimental rounds (with a comparison of how a non-linear signal following a power law can be transformed to a linear slope using a log-transformation).

Our paper with @sarabogels.bsky.social, covering our pre-registered multi-year research, is now finally out in Cognition. We show that in conversations people reduce their multimodal signals non-linearly; the steeper this non-linear drop-off, the greater the communicative success.

www.wimpouw.com/files/Bogels...
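The log-transformation trick mentioned in the figure is just the identity y = a·t^b ⇒ log y = log a + b·log t, which turns a power-law drop-off into a straight line whose slope is the exponent. A minimal illustration with made-up parameters:

```python
import math

# A power-law reduction curve y = a * t^b becomes linear after taking logs:
# log(y) = log(a) + b * log(t), so the drop-off rate b is a straight-line slope.
a, b = 5.0, -0.7                      # hypothetical signal-reduction parameters
rounds = range(1, 7)
y = [a * t ** b for t in rounds]

log_t = [math.log(t) for t in rounds]
log_y = [math.log(v) for v in y]

# Because the data are exactly power-law, the slope between the endpoints of
# the log-log curve recovers b.
slope = (log_y[-1] - log_y[0]) / (log_t[-1] - log_t[0])
print(round(slope, 6))                # -0.7
```

With noisy real data one would fit the slope by regression on the logged values rather than from two points, but the linearization is the same.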

11.11.2025 16:49 — 👍 34    🔁 15    💬 3    📌 0
Screenshot of a figure with two panels, labeled (a) and (b). The caption reads: "Figure 1: (a) Illustration of messages (left) and strings (right) in toy domain. Blue = grammatical strings. Red = ungrammatical strings. (b) Surprisal (negative log probability) assigned to toy strings by GPT-2."

New work to appear @ TACL!

Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.

Yet they often assign higher probability to ungrammatical strings than to grammatical strings.

How can both things be true? 🧵👇
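One way to see how both can hold: probability (and its negative log, surprisal) rewards frequent material, not well-formedness per se. A toy bigram model (all probabilities invented for illustration; this is not GPT-2) in which an ill-formed string built from frequent transitions outscores a well-formed but rare one:

```python
import math

# Toy bigram "language model" with INVENTED probabilities, to show how
# surprisal (-log2 probability) rewards frequent material rather than
# well-formedness: an ill-formed string of frequent transitions can get
# a higher probability (lower surprisal) than a well-formed but rare one.
bigram_p = {
    ("<s>", "the"): 0.4, ("the", "of"): 0.1, ("of", "the"): 0.6,
    ("<s>", "quokkas"): 0.001, ("quokkas", "frolic"): 0.001,
    ("frolic", "gleefully"): 0.001,
}

def surprisal(tokens, p, floor=1e-6):
    """Total surprisal in bits; unseen bigrams get a small floor probability."""
    return sum(-math.log2(p.get(bg, floor))
               for bg in zip(["<s>"] + tokens, tokens))

grammatical   = ["quokkas", "frolic", "gleefully"]  # well-formed, rare words
ungrammatical = ["the", "of", "the"]                # ill-formed, frequent words

print(surprisal(grammatical, bigram_p))    # ~29.9 bits
print(surprisal(ungrammatical, bigram_p))  # ~5.4 bits, despite being ill-formed
```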

10.11.2025 22:11 — 👍 90    🔁 20    💬 2    📌 3
Preview
Task-optimized models of sensory uncertainty reproduce human confidence judgments Sensory input is often ambiguous, leading to uncertain interpretations of the external world. Estimates of perceptual uncertainty might be useful in guiding behavior, but it remains unclear whether hu...

New pre-print from our lab, by Lakshmi Govindarajan with help from Sagarika Alavilli, introducing a new type of model for studying sensory uncertainty. www.biorxiv.org/content/10.1...
Here is a summary. (1/n)

09.11.2025 21:34 — 👍 28    🔁 9    💬 1    📌 0

I know the students are learning a lot from your class. (Wish I could take it!)

They're lucky to have you and would be crazy not to give you tenure!

29.10.2025 18:41 — 👍 5    🔁 0    💬 0    📌 0
Post image

I will be recruiting PhD students via Georgetown Linguistics this application cycle! Come join us in the PICoL (pronounced “pickle”) lab. We focus on psycholinguistics and cognitive modeling using LLMs. See the linked flyer for more details: bit.ly/3L3vcyA

21.10.2025 21:52 — 👍 27    🔁 14    💬 2    📌 0
