
Will Turner

@renrutmailliw.bsky.social

cognitive neuroscience postdoc at stanford https://bootstrapbill.github.io/ he/him

130 Followers  |  182 Following  |  7 Posts  |  Joined: 29.01.2025

Latest posts by renrutmailliw.bsky.social on Bluesky

proud to share this work, led by the brilliant @ilinabg.bsky.social, now out in Nature! Ilina finds that speech-sound neural processing is VERY similar in a language you know and one you don't. differences only emerge at the level of word boundaries and learnt statistical structure 🧠✨

20.11.2025 19:11 — 👍 58    🔁 11    💬 2    📌 1
FOODEEG: An open dataset of human electroencephalographic and behavioural responses to food images Investigating the neurocognitive mechanisms underlying food choices has the potential to advance our understanding of eating behaviour and inform health-targeted interventions and policy. Large, publi...

Our new preprint on the FOODEEG open dataset is out! EEG recordings and behavioural responses on food cognition tasks for 117 participants will be made publicly available 🧠 @danfeuerriegel.bsky.social @tgro.bsky.social

www.biorxiv.org/content/10.1...

10.11.2025 23:34 — 👍 17    🔁 9    💬 0    📌 2
Human cortical dynamics of auditory word form encoding We perceive continuous speech as a series of discrete words, despite the lack of clear acoustic boundaries. The superior temporal gyrus (STG) encodes …

happy to share our new paper, out now in Neuron! led by the incredible Yizhen Zhang, we explore how the brain segments continuous speech into word-forms and uses adaptive dynamics to code for relative time - www.sciencedirect.com/science/arti...

07.11.2025 18:16 — 👍 49    🔁 17    💬 2    📌 1
FNR Awards 2025: Outstanding PhD Thesis - Jill Kries (YouTube video by FNRLux)

I am so honored to have received an Outstanding PhD Thesis award from the Luxembourg National Research Fund @fnr.lu! 🏆

My PhD research was about how language is processed in the brain, with a focus on patients with a language disorder called aphasia 🧠 Find out more ➡️ youtu.be/E-Zww-B1jFQ?...

05.11.2025 17:29 — 👍 12    🔁 2    💬 2    📌 0
PNAS Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...

Delighted to share our new paper, now out in PNAS! www.pnas.org/doi/10.1073/...

"Hierarchical dynamic coding coordinates speech comprehension in the brain"

with dream team @alecmarantz.bsky.social, @davidpoeppel.bsky.social, @jeanremiking.bsky.social

Summary 👇

1/8

22.10.2025 05:21 — 👍 92    🔁 35    💬 2    📌 5
Electroencephalographic Decoding of Conscious versus Unconscious Representations during Binocular Rivalry Abstract. Theories of visual awareness often fall into two general categories, those assuming that awareness arises rapidly within visual cortex and those assuming that awareness arises more slowly as...

Totally agree. Exact interpretation of that example hinges on having criteria for linking specific decoding results to conscious experience... and similar studies seemingly point to opposite conclusion, e.g., direct.mit.edu/jocn/article... so jury still out imo!

30.10.2025 01:44 — 👍 1    🔁 0    💬 0    📌 0

Fantastic commentary on @smfleming.bsky.social & @matthiasmichel.bsky.social's BBS paper by @renrutmailliw.bsky.social, @lauragwilliams.bsky.social & Hinze Hogendoorn. Hits lots of nails on the head. As @neddo.bsky.social & I also argue: postdiction doesn't prove consciousness is slow! 1/3

29.10.2025 13:22 — 👍 16    🔁 2    💬 3    📌 0

Thanks Ian, nice to hear you liked it!

30.10.2025 01:29 — 👍 1    🔁 0    💬 0    📌 0

Super happy to share my very first first-author paper out in
@sfnjournals.bsky.social! We show content-specific predictions are represented in an alpha rhythm. It's been a beautiful, inspiring, yet challenging journey.
Huge thanks to everyone, especially @peterkok.bsky.social @jhaarsma.bsky.social

21.10.2025 15:57 — 👍 23    🔁 6    💬 1    📌 1
Contents of visual predictions oscillate at alpha frequencies Predictions of future events have a major impact on how we process sensory signals. However, it remains unclear how the brain keeps predictions online in anticipation of future inputs. Here, we combin...

@dotproduct.bsky.social's first first-author paper is finally out in @sfnjournals.bsky.social! Her findings show that content-specific predictions fluctuate at alpha frequencies, suggesting a more specific role for alpha oscillations than we may have thought. With @jhaarsma.bsky.social. 🧠🟦 🧠🤖

21.10.2025 11:05 — 👍 94    🔁 38    💬 4    📌 2
Age and gender distortion in online media and large language models - Nature Stereotypes of age-related gender bias are socially distorted, as evidenced by the age gap in the representations of women and men across various media and algorithms, despite no systematic age differ...

Age and gender distortion in online media and large language models

"Furthermore, when generating and evaluating resumes, ChatGPT assumes that women are younger and less experienced, rating older male applicants as of higher quality."

No surprise, but now documented:
www.nature.com/articles/s41...

16.10.2025 08:53 — 👍 31    🔁 14    💬 1    📌 4

really fun getting to think about the "time to consciousness" with this dream team! we discuss interesting parallels between vision and language processing on phenomena like postdictive perceptual effects, among other things! check it out 😄

01.10.2025 19:04 — 👍 6    🔁 1    💬 0    📌 0
A picture of our paper's abstract and title: The order of task decisions and confidence ratings has little effect on metacognition.

Task decisions and confidence ratings are fundamental measures in metacognition research, but using these reports requires collecting them in some order. Only three orders exist and are used in an ad hoc manner across studies. Evidence suggests that when task decisions precede confidence, this report order can enhance metacognition. If verified, this effect pervades studies of metacognition and will lead the synthesis of this literature to invalid conclusions. In this Registered Report, we tested the effect of report order across popular domains of metacognition and probed two factors that may underlie why order effects have been observed in past studies: report time and motor preparation. We examined these effects in a perception experiment (n = 75) and memory experiment (n = 50), controlling task accuracy and learning. Our registered analyses found little effect of report order on metacognitive efficiency, even when timing and motor preparation were experimentally controlled. Our findings suggest the order of task decisions and confidence ratings has little effect on metacognition, and need not constrain secondary analysis or experimental design.

🚨 Out now in @commspsychol.nature.com 🚨
doi.org/10.1038/s442...

Our #RegisteredReport tested whether the order of task decisions and confidence ratings biases #metacognition.

Some said decisions → confidence enhances metacognition. If true, decades of findings would be affected.

30.09.2025 08:10 — 👍 27    🔁 10    💬 1    📌 0
Sensory Horizons and the Functions of Conscious Vision | Behavioral and Brain Sciences | Cambridge Core Sensory Horizons and the Functions of Conscious Vision

Thanks to Steve and Matthias for writing this interesting and ambitious theoretical perspective: bit.ly/4jF4kRp.

Although we don't (yet) agree w/ one of their foundational claims, we think this perspective is valuable, and should spawn lots of important discussions and follow-up work :)

29.09.2025 19:00 — 👍 2    🔁 0    💬 1    📌 0

New BBS article w/ @lauragwilliams.bsky.social and Hinze Hogendoorn, just accepted! We respond to a thought-provoking article by @smfleming.bsky.social & @matthiasmichel.bsky.social, and argue that it's premature to conclude that conscious perception is delayed by 350-450ms: bit.ly/4nYNTlb

29.09.2025 19:00 — 👍 26    🔁 10    💬 1    📌 2

We present our preprint on ViV1T, a transformer for dynamic mouse V1 response prediction. We reveal novel response properties and confirm them in vivo.

With @wulfdewolf.bsky.social, Danai Katsanevaki, @arnoonken.bsky.social, @rochefortlab.bsky.social.

Paper and code at the end of the thread!

🧵1/7

19.09.2025 12:37 — 👍 17    🔁 12    💬 2    📌 0

🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵

19.09.2025 13:05 — 👍 195    🔁 71    💬 11    📌 4
Research Coordinator, Minds, Experiences, and Language Lab in Graduate School of Education, Stanford, California, United States The Stanford Graduate School of Education (GSE) seeks a full-time Research Coordinator (acting lab manager) to help launch and coordinate the Minds,.....

I'm hiring!! 🎉 Looking for a full-time Lab Manager to help launch the Minds, Experiences, and Language Lab at Stanford. We'll use all-day language recording, eye tracking, & neuroimaging to study how kids & families navigate unequal structural constraints. Please share:
phxc1b.rfer.us/STANFORDWcqUYo

15.09.2025 18:57 — 👍 73    🔁 48    💬 2    📌 0

Looking forward to #ICON2025 next week! We will have several presentations on mental imagery, reality monitoring and expectations:

To kick us off, on Tuesday at 15:30, Martha Cottam will present:

P2.12 | Presence Expectations Modulate the Neural Signatures of Content Prediction Errors

11.09.2025 15:28 — 👍 23    🔁 5    💬 1    📌 0

In August I had the pleasure of presenting a poster at the Cognitive Computational Neuroscience (CCN) conference in Amsterdam. My poster was about the developmental trajectory and neuroanatomical correlates of speech comprehension 🧒➡️🧑 🧠

08.09.2025 21:50 — 👍 23    🔁 1    💬 2    📌 0
The Latency of a Domain-General Visual Surprise Signal is Attribute Dependent Predictions concerning upcoming visual input play a key role in resolving percepts. Sometimes input is surprising, under which circumstances the brain must calibrate erroneous predictions so that perc...

🚨Pre-print of some cool data from my PhD days!
doi.org/10.1101/2025...

☝️Did you know that visual surprise is (probably) a domain-general signal and/or operates at the object-level?
✌️Did you also know that the timing of this response depends on the specific attribute that violates an expectation?

19.08.2025 00:30 — 👍 15    🔁 9    💬 2    📌 1

Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!

19.08.2025 01:12 — 👍 52    🔁 10    💬 1    📌 1

looking forward to seeing everyone at #CCN2025! here's a snapshot of the work from my lab that we'll be presenting on speech neuroscience 🧠 ✨

10.08.2025 18:09 — 👍 53    🔁 8    💬 0    📌 2
Screenshot of the article "How Convincing Is a Crowd? Quantifying the Persuasiveness of a Consensus for Different Individuals and Types of Claims"


We know that a consensus of opinions is persuasive, but how reliable is this effect across people and types of consensus, and are there any kinds of claims where people care less about what other people think? This is what we tested in our new(ish) paper in @psychscience.bsky.social

10.08.2025 23:11 — 👍 65    🔁 32    💬 5    📌 2

I really like this paper. I fear that people think the authors are claiming that the brain isn't predictive though, which this study cannot (and does not) address. As the title says, the data purely show that evoked responses are not necessarily prediction errors, which makes sense!

15.07.2025 11:43 — 👍 17    🔁 4    💬 2    📌 1
Mapping the position of moving stimuli. The top three panels show the three events of interest: stimulus onset, stimulus offset, and stimulus reversal (left to right). The bottom three panels show group-level probabilistic spatio-temporal maps centered around these three events. Diagonal black lines mark the true position of the stimulus. Horizontal dashed lines mark the time of the event of interest (stimulus onset, offset, or reversal). Red indicates high probability regions and blue indicates low probability regions ('position evidence' gives the difference between the posterior probability and chance). Note: these maps were generated from recordings at posterior/occipital sites.


It takes time for the #brain to process information, so how can we catch a flying ball? @renrutmailliw.bsky.social &co reveal a multi-stage #motion #extrapolation occurring in the #HumanBrain, shifting the represented position of moving objects closer to real time @plosbiology.org 🧪 plos.io/3Fm83Fc

27.05.2025 18:06 — 👍 18    🔁 5    💬 0    📌 0

It takes time for the #brain to process information, so how can we catch a flying ball? This study provides evidence of multi-stage #motion #extrapolation occurring in the #HumanBrain, shifting the represented position of moving objects closer to real time @plosbiology.org 🧪 plos.io/3Fm83Fc

27.05.2025 13:17 — 👍 2    🔁 1    💬 0    📌 0
Characterising the neural time-courses of food attribute representations Dietary decisions involve the consideration of multiple, often conflicting, food attributes that precede the computation of an overall value for a food. The differences in the speed at which attribute...

New preprint from the lab!

We used EEG ⚡🧠 to map how 12 different food attributes are represented in the brain. 🍎🥦🥪🍙🍮

www.biorxiv.org/content/10.1...

Led by Violet Chae in collaboration with @tgro.bsky.social

18.05.2025 01:46 — 👍 21    🔁 6    💬 0    📌 0

Thanks Henry! All kudos really go to Charlie for the modelling! Hope all is well in Brissy :)

23.05.2025 21:36 — 👍 1    🔁 0    💬 0    📌 0

What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness, revealing an interpretable, topographic representational basis for language processing shared across individuals

23.05.2025 16:59 — 👍 71    🔁 30    💬 3    📌 0
