
Will Turner

@renrutmailliw.bsky.social

cognitive neuroscience postdoc at stanford https://bootstrapbill.github.io/ he/him

107 Followers  |  160 Following  |  5 Posts  |  Joined: 29.01.2025

Latest posts by renrutmailliw.bsky.social on Bluesky

really fun getting to think about the "time to consciousness" with this dream team! we discuss interesting parallels between vision and language processing on phenomena like postdictive perceptual effects, among other things! check it out 😄

01.10.2025 19:04 — 👍 5    🔁 1    💬 0    📌 0
A picture of our paper's abstract and title: The order of task decisions and confidence ratings has little effect on metacognition.

Task decisions and confidence ratings are fundamental measures in metacognition research, but using these reports requires collecting them in some order. Only three orders exist and are used in an ad hoc manner across studies. Evidence suggests that when task decisions precede confidence, this report order can enhance metacognition. If verified, this effect pervades studies of metacognition and will lead the synthesis of this literature to invalid conclusions. In this Registered Report, we tested the effect of report order across popular domains of metacognition and probed two factors that may underlie why order effects have been observed in past studies: report time and motor preparation. We examined these effects in a perception experiment (n = 75) and memory experiment (n = 50), controlling task accuracy and learning. Our registered analyses found little effect of report order on metacognitive efficiency, even when timing and motor preparation were experimentally controlled. Our findings suggest the order of task decisions and confidence ratings has little effect on metacognition, and need not constrain secondary analysis or experimental design.

🚨 Out now in @commspsychol.nature.com 🚨
doi.org/10.1038/s442...

Our #RegisteredReport tested whether the order of task decisions and confidence ratings biases #metacognition.

Some said decisions → confidence enhances metacognition. If true, decades of findings would be affected.

30.09.2025 08:10 — 👍 25    🔁 10    💬 1    📌 0
Sensory Horizons and the Functions of Conscious Vision | Behavioral and Brain Sciences | Cambridge Core

Thanks to Steve and Matthias for writing this interesting and ambitious theoretical perspective: bit.ly/4jF4kRp.

Although we don't (yet) agree w/ one of their foundational claims, we think this perspective is valuable, and should spawn lots of important discussions and follow-up work :)

29.09.2025 19:00 — 👍 2    🔁 0    💬 1    📌 0

New BBS article w/ @lauragwilliams.bsky.social and Hinze Hogendoorn, just accepted! We respond to a thought-provoking article by @smfleming.bsky.social & @matthiasmichel.bsky.social, and argue that it's premature to conclude that conscious perception is delayed by 350-450ms: bit.ly/4nYNTlb

29.09.2025 19:00 — 👍 22    🔁 7    💬 1    📌 1

We present our preprint on ViV1T, a transformer for dynamic mouse V1 response prediction. We reveal novel response properties and confirm them in vivo.

With @wulfdewolf.bsky.social, Danai Katsanevaki, @arnoonken.bsky.social, @rochefortlab.bsky.social.

Paper and code at the end of the thread!

🧵1/7

19.09.2025 12:37 — 👍 17    🔁 12    💬 2    📌 0

🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵

19.09.2025 13:05 — 👍 190    🔁 67    💬 10    📌 3
Research Coordinator, Minds, Experiences, and Language Lab in Graduate School of Education, Stanford, California, United States. The Stanford Graduate School of Education (GSE) seeks a full-time Research Coordinator (acting lab manager) to help launch and coordinate the Minds,.....

I'm hiring!! 🎉 Looking for a full-time Lab Manager to help launch the Minds, Experiences, and Language Lab at Stanford. We'll use all-day language recording, eye tracking, & neuroimaging to study how kids & families navigate unequal structural constraints. Please share:
phxc1b.rfer.us/STANFORDWcqUYo

15.09.2025 18:57 — 👍 71    🔁 48    💬 2    📌 0

Looking forward to #ICON2025 next week! We will have several presentations on mental imagery, reality monitoring and expectations:

To kick us off, on Tuesday at 15:30, Martha Cottam will present:

P2.12 | Presence Expectations Modulate the Neural Signatures of Content Prediction Errors

11.09.2025 15:28 — 👍 23    🔁 5    💬 1    📌 0

In August I had the pleasure of presenting a poster at the Cognitive Computational Neuroscience (CCN) conference in Amsterdam. My poster was about the developmental trajectory and neuroanatomical correlates of speech comprehension 🧒➡️🧑 🧠

08.09.2025 21:50 — 👍 23    🔁 1    💬 2    📌 0
The Latency of a Domain-General Visual Surprise Signal is Attribute Dependent Predictions concerning upcoming visual input play a key role in resolving percepts. Sometimes input is surprising, under which circumstances the brain must calibrate erroneous predictions so that perc...

🚨Pre-print of some cool data from my PhD days!
doi.org/10.1101/2025...

☝️Did you know that visual surprise is (probably) a domain-general signal and/or operates at the object-level?
✌️Did you also know that the timing of this response depends on the specific attribute that violates an expectation?

19.08.2025 00:30 — 👍 15    🔁 9    💬 2    📌 0

Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!

19.08.2025 01:12 — 👍 51    🔁 10    💬 1    📌 1

looking forward to seeing everyone at #CCN2025! here's a snapshot of the work from my lab that we'll be presenting on speech neuroscience 🧠 ✨

10.08.2025 18:09 — 👍 53    🔁 8    💬 0    📌 2
Screenshot of the article "How Convincing Is a Crowd? Quantifying the Persuasiveness of a Consensus for Different Individuals and Types of Claims"

We know that a consensus of opinions is persuasive, but how reliable is this effect across people and types of consensus, and are there any kinds of claims where people care less about what other people think? This is what we tested in our new(ish) paper in @psychscience.bsky.social

10.08.2025 23:11 — 👍 64    🔁 32    💬 5    📌 2

I really like this paper. I fear that people think the authors are claiming that the brain isn't predictive though, which this study cannot (and does not) address. As the title says, the data purely show that evoked responses are not necessarily prediction errors, which makes sense!

15.07.2025 11:43 — 👍 17    🔁 4    💬 2    📌 1
Mapping the position of moving stimuli. The top three panels show the three events of interest: stimulus onset, stimulus offset, and stimulus reversal (left to right). The bottom three panels show group-level probabilistic spatio-temporal maps centered around these three events. Diagonal black lines mark the true position of the stimulus. Horizontal dashed lines mark the time of the event of interest (stimulus onset, offset, or reversal). Red indicates high probability regions and blue indicates low probability regions ('position evidence' gives the difference between the posterior probability and chance). Note: these maps were generated from recordings at posterior/occipital sites.

It takes time for the #brain to process information, so how can we catch a flying ball? @renrutmailliw.bsky.social &co reveal a multi-stage #motion #extrapolation occurring in the #HumanBrain, shifting the represented position of moving objects closer to real time @plosbiology.org 🧪 plos.io/3Fm83Fc

27.05.2025 18:06 — 👍 18    🔁 5    💬 0    📌 0
Mapping the position of moving stimuli. The top three panels show the three events of interest: stimulus onset, stimulus offset, and stimulus reversal (left to right). The bottom three panels show group-level probabilistic spatio-temporal maps centered around these three events. Diagonal black lines mark the true position of the stimulus. Horizontal dashed lines mark the time of the event of interest (stimulus onset, offset, or reversal). Red indicates high probability regions and blue indicates low probability regions ('position evidence' gives the difference between the posterior probability and chance). Note: these maps were generated from recordings at posterior/occipital sites.

It takes time for the #brain to process information, so how can we catch a flying ball? This study provides evidence of multi-stage #motion #extrapolation occurring in the #HumanBrain, shifting the represented position of moving objects closer to real time @plosbiology.org 🧪 plos.io/3Fm83Fc

27.05.2025 13:17 — 👍 2    🔁 1    💬 0    📌 0
Characterising the neural time-courses of food attribute representations Dietary decisions involve the consideration of multiple, often conflicting, food attributes that precede the computation of an overall value for a food. The differences in the speed at which attribute...

New preprint from the lab!

We used EEG⚡🧠 to map how 12 different food attributes are represented in the brain. 🍎🥦🥪🍙🍮

www.biorxiv.org/content/10.1...

Led by Violet Chae in collaboration with @tgro.bsky.social

18.05.2025 01:46 — 👍 20    🔁 6    💬 0    📌 0

Thanks Henry! All kudos really go to Charlie for the modelling! Hope all is well in Brissy :)

23.05.2025 21:36 — 👍 1    🔁 0    💬 0    📌 0

What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals

23.05.2025 16:59 — 👍 71    🔁 30    💬 3    📌 0
GitHub - bootstrapbill/neural-location-decoding: Contains scripts for decoding the location of a moving object from EEG data. Preprint: https://www.biorxiv.org/content/10.1101/2024.04.22.590502v2

Code + data can be found here:
github.com/bootstrapbil...
osf.io/sn4a7/

23.05.2025 20:37 — 👍 3    🔁 0    💬 0    📌 0

New paper out in @plosbiology.org w/ Charlie, @phil-johnson.bsky.social, Ella, and Hinze 🎉

We track moving stimuli via EEG, find evidence that motion is extrapolated across distinct stages of processing + show how this effect may emerge from a simple synaptic learning rule!

tinyurl.com/2szh6w5c

23.05.2025 20:34 — 👍 24    🔁 10    💬 4    📌 0
UCL – University College London. UCL is consistently ranked as one of the top ten universities in the world (QS World University Rankings 2010-2022) and is No.2 in the UK for research power (Research Excellence Framework 2021).

📣cog neuro postdoc opportunity! Interested in studying attention & exploration w/ cutting edge M/EEG? 🧐care about making vision science a bit more naturalistic? 🌱LandauLab is hiring! We seek resourceful, curious and creative researchers who can join the newly forming London-based team@ucl.ac.uk! ...

15.05.2025 16:07 — 👍 23    🔁 17    💬 2    📌 3

I'm very pleased to share our latest study:
'Emergence of Language in the Developing Brain',
by L Evanson, P Bourdillon et al:
- Paper: ai.meta.com/research/pub...
- Blog: ai.meta.com/blog/meta-fa...
- Thread below 👇

15.05.2025 16:00 — 👍 76    🔁 22    💬 4    📌 3
Sensory Horizons and the Functions of Conscious Vision | Behavioral and Brain Sciences | Cambridge Core

Very happy to announce that our paper "Sensory Horizons and the Functions of Conscious Vision" is now out as a target article in BBS!! @smfleming.bsky.social and I present a new theory of the evolution and functions of visual consciousness. Article here: doi.org/10.1017/S014.... A (long) thread 🧵

21.04.2025 15:27 — 👍 177    🔁 69    💬 6    📌 8
Stimulus dependencies—rather than next-word prediction—can explain pre-onset brain encoding during natural listening

In @elife.bsky.social: Stimulus dependencies—rather than next-word prediction—can explain pre-onset brain encoding during natural listening doi.org/10.7554/eLif...

18.04.2025 07:19 — 👍 35    🔁 15    💬 0    📌 0

Check out the latest "From Our Neurons To Yours" podcast episode! I discuss my work using large speech and language systems as "model species" to understand how the human brain processes language 🧠

17.04.2025 23:23 — 👍 21    🔁 6    💬 0    📌 0

Very happy that this primer on software engineering principles for psychology and cognitive neuroscience is now out! This was a great joint project between my group (led by postdoc Yunyan Duan) and @martinhebart.bsky.social's group (led by @rothj.bsky.social).

#psychology #cogsci #neuroskyence

15.04.2025 13:26 — 👍 15    🔁 7    💬 0    📌 1

PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN

Some of these words are consistently remembered better than others. Why is that?
In our paper, just published in J. Exp. Psychol., we provide a simple Bayesian account and show that it explains >80% of variance in word memorability: tinyurl.com/yf3md5aj

10.04.2025 14:38 — 👍 40    🔁 14    💬 1    📌 0
GitHub - coryshain/parcellate: A library for parcellating individual brains based on functional correlation

Paper: doi.org/10.1101/2025...
Parcellate codebase: github.com/coryshain/pa...
Study-specific codebase: github.com/coryshain/la...
Data repository (under construction): openneuro.org/datasets/ds0...

31.03.2025 15:19 — 👍 2    🔁 1    💬 0    📌 0
A language network in the individualized functional connectomes of over 1,000 human brains doing arbitrary tasks: A century and a half of neuroscience has yielded many divergent theories of the neurobiology of language. Two factors that likely contribute to this situation include (a) conceptual disagreement…

New brain/language study w/ @evfedorenko.bsky.social! We applied task-agnostic individualized functional connectomics (iFC) to the entire history of fMRI scanning in the Fedorenko lab, parcellating nearly 1200 brains into networks based on activity fluctuations alone. doi.org/10.1101/2025... . 🧵

31.03.2025 15:19 — 👍 43    🔁 13    💬 1    📌 2
