
Fabian Schneider

@fabianschneider.bsky.social

Doctoral researcher. Interested in memory, audition, semantics, predictive coding, spiking networks.

103 Followers  |  128 Following  |  18 Posts  |  Joined: 09.11.2023  |  1.995

Latest posts by fabianschneider.bsky.social on Bluesky

Post image

Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan πŸ§ πŸ’¬

1/n

03.11.2025 15:17 β€” πŸ‘ 128    πŸ” 52    πŸ’¬ 4    πŸ“Œ 8
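For readers unfamiliar with this kind of fusion, here is a minimal, hypothetical sketch of one common way brain data can be injected into a language model: a learned adapter projects a voxel vector into the model's embedding space and prepends it as soft prefix tokens. This is an illustrative assumption for orientation only, not CorText's actual architecture; all names and dimensions below are made up.

```python
# Hypothetical sketch (NOT the CorText architecture): map brain features into an
# LLM's token-embedding space and prepend them as soft "prefix tokens".
import torch
import torch.nn as nn

class BrainPrefixAdapter(nn.Module):
    def __init__(self, n_voxels: int, d_model: int, n_prefix: int = 8):
        super().__init__()
        self.n_prefix = n_prefix
        self.d_model = d_model
        # Map one flat voxel vector onto n_prefix pseudo-token embeddings.
        self.proj = nn.Linear(n_voxels, n_prefix * d_model)

    def forward(self, brain: torch.Tensor) -> torch.Tensor:
        # brain: (batch, n_voxels) -> (batch, n_prefix, d_model)
        return self.proj(brain).view(-1, self.n_prefix, self.d_model)

adapter = BrainPrefixAdapter(n_voxels=2000, d_model=768)
brain = torch.randn(1, 2000)           # one hypothetical fMRI pattern
text_emb = torch.randn(1, 12, 768)     # embeddings of a tokenized question
fused = torch.cat([adapter(brain), text_emb], dim=1)   # (1, 8 + 12, 768)
# `fused` would then be fed to a (frozen) language model instead of plain text embeddings.
```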
IMAGINE-decoding-challenge: Predict which words participants were hearing, based on brain activity recorded while they visually perceived these items.

How well do classifiers trained on visual activity actually transfer to non-visual reactivation?

#Decoding studies often rely on training in one (visual) condition and applying the classifier to another (e.g., rest reactivation). But how well does this actually work? Show us what makes it work and win up to $1,000!

24.10.2025 06:55 β€” πŸ‘ 32    πŸ” 14    πŸ’¬ 3    πŸ“Œ 3
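As a toy illustration of the cross-condition setup the challenge targets, the sketch below trains a word classifier on one (visual) condition and evaluates it on a different (reactivation) condition, using synthetic data and scikit-learn; the actual challenge data and pipeline will of course differ.

```python
# Toy cross-condition decoding: train on the visual condition, test on reactivation.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features, n_words = 200, 300, 4

X_visual = rng.normal(size=(n_trials, n_features))   # training condition
y_visual = rng.integers(0, n_words, size=n_trials)
X_react = rng.normal(size=(n_trials, n_features))    # held-out reactivation condition
y_react = rng.integers(0, n_words, size=n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_visual, y_visual)

# Cross-condition transfer: how far above chance (1 / n_words) do we get?
print("transfer accuracy:", clf.score(X_react, y_react))
```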
Regularization, Action, and Attractors in the Dynamical β€œBayesian” Brain Abstract. The idea that the brain is a probabilistic (Bayesian) inference machine, continuously trying to figure out the hidden causes of its inputs, has become very influential in cognitive (neuro)sc...

🧠 Regularization, Action, and Attractors in the Dynamical β€œBayesian” Brain

direct.mit.edu/jocn/article...

(still uncorrected proofs, but the corrected version should be posted soon; OA is also forthcoming. For now, the PDF is at brainandexperience.org/pdf/10.1162-...)

22.10.2025 08:59 β€” πŸ‘ 27    πŸ” 12    πŸ’¬ 2    πŸ“Œ 3
What do representations tell us about a system? Image of a mouse with a scope showing a vector of activity patterns, and a neural network with a vector of unit activity patterns
Common analyses of neural representations: Encoding models (relating activity to task features) drawing of an arrow from a trace saying [on_____on____] to a neuron and spike train. Comparing models via neural predictivity: comparing two neural networks by their R^2 to mouse brain activity. RSA: assessing brain-brain or model-brain correspondence using representational dissimilarity matrices

In neuroscience, we often try to understand systems by analyzing their representations β€” using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:

05.08.2025 14:36 β€” πŸ‘ 164    πŸ” 53    πŸ’¬ 5    πŸ“Œ 0
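For concreteness, here is a small synthetic-data sketch of two of the analyses named above: an encoding model scored by cross-validated R^2, and an RSA comparison of representational dissimilarity structure. It illustrates the general recipes, not the commentary's specific analyses.

```python
# Generic encoding-model and RSA recipes on synthetic data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_stimuli, n_features, n_units = 100, 20, 50
features = rng.normal(size=(n_stimuli, n_features))   # task/model features
activity = rng.normal(size=(n_stimuli, n_units))      # recorded responses

# (1) Encoding model: cross-validated R^2 for predicting activity from features.
r2 = cross_val_score(Ridge(alpha=1.0), features, activity, cv=5, scoring="r2")
print("encoding R^2 (mean over folds):", r2.mean())

# (2) RSA: item-by-item dissimilarities in model space vs. brain space.
rdm_model = pdist(features, metric="correlation")     # vectorized upper triangle
rdm_brain = pdist(activity, metric="correlation")
rho, p = spearmanr(rdm_model, rdm_brain)
print("RSA (Spearman rho):", rho)
```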

Cool! May I join?

01.08.2025 15:17 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks Peter!! :-)

For anyone looking for a brief summary, here's a quick tour of our key findings: bsky.app/profile/fabi...

01.08.2025 11:32 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Sensory sharpening and semantic prediction errors unify competing models of predictive processing in communication The human brain makes abundant predictions in speech comprehension that, in real-world conversations, depend on conversational partners. Yet, models diverge on how such predictions are integrated with...

🧡16/16
More results, details and discussion in the full preprint: www.biorxiv.org/content/10.1...

Huge thanks to Helen Blank, the Predictive Cognition Lab, and colleagues @isnlab.bsky.social.

Happy to discuss here, via email or in person! Make sure to catch us at CCN if you're around. πŸ₯³

01.08.2025 11:24 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

🧡15/16
3. Prediction errors are not computed indiscriminately and appear to be gated by likelihood, potentially underlying robust updates to world models (where extreme prediction errors might otherwise lead to deleterious model updates).

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
🧡14/16
2. Priors sharpen representations at the sensory level, and produce high-level prediction errors.

While this contradicts traditional predictive coding, it aligns well with recent views by @clarepress.bsky.social, @peterkok.bsky.social, @danieljamesyon.bsky.social: doi.org/10.1016/j.ti...

01.08.2025 11:24 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧡13/16
So what are the key takeaways?

1. Listeners apply speaker-specific semantic priors in speech comprehension.

This extends previous findings showing speaker-specific adaptations at the phonetic, phonemic and lexical levels.

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡12/16
In fact, neurally we find a double dissociation between type of prior and congruency: semantic prediction errors are apparent relative to speaker-invariant priors iff the word is highly unlikely given the speaker prior, but emerge relative to speaker-specific priors otherwise!

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡11/16
Interestingly, participants take longer to respond to words incongruent with the speaker, but response times are a function of word probability given the speaker only for congruent words. This may also suggest some kind of gating, incurring a switch cost!

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡10/16
So is there some process gating which semantic prediction errors are computed?

In real time, we sample particularly congruent and incongruent exemplars of a speaker for each subject. We present unmorphed but degraded words and ask for word identification.

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡9/16
Conversely, here we find that only speaker-specific semantic surprisal improves encoding performance. Explained variance clusters across all sensors between 150 and 630 ms, consistent with prediction errors at higher levels of the processing hierarchy such as semantics!

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡8/16
What about high-level representations? Let's zoom out to the broadband EEG response.

To test for information-theoretic measures, we encode single-trial responses from acoustic and semantic surprisal, controlling for general linguistic confounds (in part through LLMs).

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
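A hedged sketch of this kind of surprisal-based encoding analysis (generic, not the preprint's exact pipeline): regress single-trial responses on surprisal regressors alongside confound regressors, and ask how much cross-validated R^2 the regressor of interest adds. All values below are synthetic.

```python
# Generic single-trial encoding analysis: does semantic surprisal add explained variance?
# Synthetic data; the surprisal values stand in for -log p(word | context) from a model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels = 400, 64

semantic_surprisal = rng.exponential(size=(n_trials, 1))
acoustic_surprisal = rng.exponential(size=(n_trials, 1))
confounds = rng.normal(size=(n_trials, 3))     # e.g. word frequency, length (placeholders)
eeg = rng.normal(size=(n_trials, n_channels))  # single-trial broadband responses

def cv_r2(X):
    # Cross-validated encoding performance of a ridge model.
    return cross_val_score(Ridge(alpha=10.0), X, eeg, cv=5, scoring="r2").mean()

base = cv_r2(np.hstack([acoustic_surprisal, confounds]))
full = cv_r2(np.hstack([acoustic_surprisal, semantic_surprisal, confounds]))
print("gain in R^2 from semantic surprisal:", full - base)
```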
Post image

🧡7/16
How are they altered? Our RSMs naturally represent expected information. Due to their geometry, a sign flip inverts the pattern to represent unexpected information.

Coefficients show clear evidence of sharpening at the sensory level, pulling representations towards predictions!

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡6/16
We find that the similarity structure of sensory representations is best explained by combining speaker-invariant and -specific acoustic predictions. Critically, purely semantic predictions do not help.

Semantic predictions alter sensory representations at the acoustic level!

01.08.2025 11:24 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡5/16
We compute similarity between reconstructions for both speakers and original words from morph creation. We encode observed sensory RSMs from speaker-invariant and -specific acoustic and semantic predictions, controlling for raw acoustics and general linguistic predictions.

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
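In generic form, encoding an observed representational similarity matrix (RSM) from several model RSMs can be set up as a multiple regression over vectorized upper triangles, as in the synthetic sketch below. The predictor names mirror the ones mentioned above, but the estimator and data are illustrative assumptions, not the preprint's implementation.

```python
# Generic RSM encoding: regress the observed RSM on several model RSMs
# using their vectorized upper triangles. All matrices are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_items = 60
iu = np.triu_indices(n_items, k=1)

def random_rsm():
    m = rng.normal(size=(n_items, n_items))
    m = (m + m.T) / 2                 # make it symmetric
    return m[iu]                      # vectorized upper triangle

observed = random_rsm()               # sensory RSM computed from reconstructions
predictors = np.column_stack([
    random_rsm(),   # speaker-invariant acoustic predictions
    random_rsm(),   # speaker-specific acoustic predictions
    random_rsm(),   # semantic predictions
    random_rsm(),   # control: raw acoustics / general linguistic predictions
])

model = LinearRegression().fit(predictors, observed)
print("RSM regression coefficients:", model.coef_)
```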
Post image

🧡4/16
Let's zoom in on the sensory level: We train stimulus reconstruction models to decode auditory spectrograms from EEG recordings.

If predictions shape neural representations at the sensory level, we should find reconstructed representational content shifted by speakers.

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
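Stimulus-reconstruction (backward) models of this kind are typically regularized linear maps from time-lagged neural data to the spectrogram. The sketch below shows that generic setup on synthetic data; it is not the preprint's actual model or preprocessing.

```python
# Generic backward model: reconstruct a spectrogram from time-lagged EEG with ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_samples, n_channels, n_freqs, n_lags = 5000, 64, 16, 10

eeg = rng.normal(size=(n_samples, n_channels))
spec = rng.normal(size=(n_samples, n_freqs))   # target auditory spectrogram

# Time-lagged EEG features: each row sees the current and the n_lags - 1 preceding samples.
lagged = np.hstack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])
lagged, spec = lagged[n_lags:], spec[n_lags:]  # drop rows contaminated by wrap-around

X_train, X_test, y_train, y_test = train_test_split(lagged, spec, shuffle=False)
decoder = Ridge(alpha=100.0).fit(X_train, y_train)
print("held-out reconstruction R^2:", decoder.score(X_test, y_test))
```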
Post image

🧡3/16
Indeed, participants report hearing words as a function of semantic probability given the speaker, scaling with exposure.

But how? Predictive coding invokes prediction errors, but Bayesian inference requires sharpening. Does the brain represent un-/expected information?

01.08.2025 11:24 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🧡2/16
We played morphed audio files (e.g., sea/tea) and had participants report which of the two words they had heard. Critically, the same morphs were played in different speaker contexts, with speaker-specific feedback reinforcing robust speaker-specific semantic expectations.

01.08.2025 11:24 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🚨 Fresh preprint w/ @helenblank.bsky.social!

How does the brain acquire expectations about a conversational partner, and how are priors integrated w/ sensory inputs?

Current evidence diverges. Is it prediction error? Sharpening?

Spoiler: It's both.πŸ‘€

🧡1/16

www.biorxiv.org/content/10.1...

01.08.2025 11:24 β€” πŸ‘ 15    πŸ” 7    πŸ’¬ 2    πŸ“Œ 2

It's been a while since our last laminar MEG paper, but we're back! This time we push beyond deep versus superficial distinctions and go whole hog. Check it out- lots more exciting stuff to come! πŸ§ πŸ“ˆ

02.06.2025 12:31 β€” πŸ‘ 22    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0
