Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬 (toy sketch of the idea below)
1/n
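A minimal sketch of the general fusion recipe (not the actual CorText code): project brain features into the LLM's embedding space as soft "neural tokens" and prepend them to the text stream. The adapter name, shapes, and token count below are all hypothetical.

```python
import torch
import torch.nn as nn

class BrainPrefixAdapter(nn.Module):
    """Map a brain-activity vector to k soft prefix tokens for an LLM."""
    def __init__(self, n_voxels: int, d_model: int, k_tokens: int = 8):
        super().__init__()
        self.k, self.d = k_tokens, d_model
        self.proj = nn.Linear(n_voxels, d_model * k_tokens)

    def forward(self, brain: torch.Tensor) -> torch.Tensor:
        # brain: (batch, n_voxels) -> (batch, k_tokens, d_model)
        return self.proj(brain).view(-1, self.k, self.d)

# Prepend the adapter output to the LLM's token embeddings; the model
# can then answer natural-language questions conditioned on the scan.
adapter = BrainPrefixAdapter(n_voxels=50_000, d_model=4096)
neural_tokens = adapter(torch.randn(2, 50_000))  # (2, 8, 4096)
```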
How well do classifiers trained on visual activity actually transfer to non-visual reactivation?
#Decoding studies often train classifiers in one (visual) condition and apply them to another (e.g., rest reactivation). But how well does this actually work? Show us what makes it work and win up to $1000! (Sketch of the setup below.)
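For concreteness, a minimal sketch of the setup in question, with random placeholder data (names and shapes are illustrative only): fit a classifier on visual trials, then read out graded reactivation evidence at rest.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_visual = rng.standard_normal((200, 500))  # trials x voxels, visual task
y_visual = rng.integers(0, 2, 200)          # stimulus category labels
X_rest = rng.standard_normal((1000, 500))   # rest timepoints, same voxels

# Train in the visual condition...
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_visual, y_visual)

# ...then transfer: graded reactivation evidence across rest timepoints.
reactivation = clf.predict_proba(X_rest)[:, 1]
```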
🧠 Regularization, Action, and Attractors in the Dynamical “Bayesian” Brain
direct.mit.edu/jocn/article...
(Still uncorrected proofs, but the corrected version should be posted soon. OA is also forthcoming; for now, a PDF is at brainandexperience.org/pdf/10.1162-...)
[Image: What do representations tell us about a system? Common analyses of neural representations: encoding models (relating activity to task features); comparing models via their neural predictivity (R² to brain activity); and RSA, assessing brain-brain or model-brain correspondence using representational dissimilarity matrices.]
In neuroscience, we often try to understand systems by analyzing their representations, using tools like regression or RSA (toy example below). But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
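As a primer on the second family of analyses, a toy RSA example with random placeholder data (the shapes and distance metric are arbitrary choices, not a recommendation):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
brain = rng.standard_normal((50, 300))  # conditions x voxels/units
model = rng.standard_normal((50, 128))  # conditions x model features

# Build representational dissimilarity matrices (condensed form)...
rdm_brain = pdist(brain, metric="correlation")
rdm_model = pdist(model, metric="correlation")

# ...and assess brain-model correspondence by rank correlation.
rho, _ = spearmanr(rdm_brain, rdm_model)
print(f"brain-model RDM correlation: {rho:.2f}")
```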
Cool! May I join?
Thanks Peter!! :-)
For anyone looking for a brief summary, here's a quick tour of our key findings: bsky.app/profile/fabi...
🧵16/16
More results, details and discussion in the full preprint: www.biorxiv.org/content/10.1...
Huge thanks to Helen Blank, the Predictive Cognition Lab, and colleagues @isnlab.bsky.social.
Happy to discuss here, via email or in person! Make sure to catch us at CCN if you're around. 🥳
🧵15/16
3. Prediction errors are not computed indiscriminately and appear to be gated by likelihood, potentially underlying robust updates to world models (where extreme prediction errors might otherwise lead to deleterious model updates).
🧵14/16
2. Priors sharpen representations at the sensory level, and produce high-level prediction errors.
While this contradicts traditional predictive coding, it aligns well with recent views by @clarepress.bsky.social, @peterkok.bsky.social, @danieljamesyon.bsky.social: doi.org/10.1016/j.ti...
🧵13/16
So what are the key takeaways?
1. Listeners apply speaker-specific semantic priors in speech comprehension.
This extends previous findings showing speaker-specific adaptations at the phonetic, phonemic and lexical levels.
🧵12/16
In fact, neurally we find a double dissociation between type of prior and congruency: semantic prediction errors appear relative to speaker-invariant priors iff the word is highly unlikely given the speaker prior, but emerge relative to speaker-specific priors otherwise!
🧵11/16
Interestingly, participants take longer to respond to words incongruent with the speaker, but response times scale with word probability given the speaker only for congruent words. This may also suggest some kind of gating, incurring a switch cost! (Analysis logic sketched below.)
🧵10/16
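Roughly, the RT analysis logic looks like this (simulated data, not the paper's exact model): the interaction term asks whether word probability given the speaker scales RTs only on congruent trials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "logp_speaker": rng.uniform(-6, -1, n),  # log P(word | speaker)
    "congruent": rng.integers(0, 2, n),      # 1 = congruent with speaker
})
# Simulate: probability matters only on congruent trials, plus a switch cost.
df["rt"] = (900
            - 40 * df.logp_speaker * df.congruent
            + 60 * (1 - df.congruent)
            + rng.normal(0, 50, n))

fit = smf.ols("rt ~ logp_speaker * congruent", data=df).fit()
print(fit.summary().tables[1])
```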
So is there some process gating which semantic prediction errors are computed?
In real time, we sample particularly congruent and incongruent exemplars of a speaker for each subject. We present unmorphed but degraded words and ask for word identification.
🧵9/16
Conversely, here we find that only speaker-specific semantic surprisal improves encoding performance. Explained variance clusters across all sensors between 150-630ms, consistent with prediction errors at higher levels of the processing hierarchy such as semantics!
🧵8/16
What about high-level representations? Let's zoom out to the broadband EEG response.
To test information-theoretic measures, we encode single-trial responses from acoustic/semantic surprisal, controlling for general linguistic confounds (in part via LLMs). Sketch of the encoding step below.
🧵7/16
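Schematically, that encoding step (placeholder data; the real analysis is time-resolved and uses cluster statistics): the unique contribution of speaker-specific surprisal is the gain in explained variance over a confound-only model.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(3)
n_trials, n_sensors = 300, 64
eeg = rng.standard_normal((n_trials, n_sensors))  # single-trial amplitudes
surprisal = rng.standard_normal(n_trials)         # -log P(word | speaker)
confounds = rng.standard_normal((n_trials, 3))    # e.g. frequency, length, LLM surprisal

alphas = np.logspace(-2, 4, 13)
X_full = np.column_stack([surprisal, confounds])

# In-sample R² for brevity; cross-validate in a real analysis.
r2_full = RidgeCV(alphas=alphas).fit(X_full, eeg).score(X_full, eeg)
r2_base = RidgeCV(alphas=alphas).fit(confounds, eeg).score(confounds, eeg)
print(f"unique R² for speaker-specific surprisal: {r2_full - r2_base:.3f}")
```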
How are they altered? Our RSMs naturally represent expected information; due to their geometry, a sign flip inverts the pattern to represent unexpected information (toy illustration below).
Coefficients show clear evidence of sharpening at the sensory level, pulling representations towards predictions!
🧵6/16
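A toy illustration of the sign-flip logic (simulated RSMs): a positive coefficient on the prediction RSM means neural similarity follows expected content (sharpening); a negative one means it follows unexpected content (prediction error).

```python
import numpy as np

rng = np.random.default_rng(4)
pred_rsm = rng.standard_normal(190)  # vectorized prediction RSM (upper triangle)
sharpened = 0.8 * pred_rsm + 0.2 * rng.standard_normal(190)
pred_error = -0.8 * pred_rsm + 0.2 * rng.standard_normal(190)

for name, neural in [("sharpening", sharpened), ("prediction error", pred_error)]:
    beta = np.polyfit(pred_rsm, neural, 1)[0]  # slope of neural ~ prediction
    print(f"{name}: beta = {beta:+.2f}")
```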
We find that similarity structure of sensory representations is best explained by combining speaker-invariant and -specific acoustic predictions. Critically, purely semantic predictions do not help.
Semantic predictions alter sensory representations at the acoustic level!
🧵5/16
We compute the similarity between reconstructions for both speakers and the original words from morph creation. We then encode the observed sensory RSMs from speaker-invariant and -specific acoustic and semantic predictions, controlling for raw acoustics and general linguistic predictions (sketch below).
🧵4/16
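In sketch form (placeholder RSMs; the predictor names are illustrative, and the real analysis includes further controls):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_pairs = 190  # vectorized upper-triangle entries of a 20x20 RSM
observed_rsm = rng.standard_normal(n_pairs)  # from EEG reconstructions

predictors = {
    "acoustic_invariant": rng.standard_normal(n_pairs),
    "acoustic_speaker": rng.standard_normal(n_pairs),
    "semantic_invariant": rng.standard_normal(n_pairs),
    "semantic_speaker": rng.standard_normal(n_pairs),
    "raw_acoustics": rng.standard_normal(n_pairs),  # control
}
X = np.column_stack(list(predictors.values()))

betas = LinearRegression().fit(X, observed_rsm).coef_
for name, b in zip(predictors, betas):
    print(f"{name}: {b:+.3f}")
```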
Let's zoom in on the sensory level: we train stimulus reconstruction models to decode auditory spectrograms from EEG recordings (sketch below).
If predictions shape neural representations at the sensory level, reconstructed representational content should shift with the speaker.
🧵3/16
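A minimal sketch of such a backward (stimulus reconstruction) model, with illustrative lags and shapes rather than the paper's exact settings:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_times, n_channels, n_freqs, max_lag = 2000, 64, 16, 25  # lag in samples

eeg = rng.standard_normal((n_times, n_channels))
spectrogram = rng.standard_normal((n_times, n_freqs))  # target to reconstruct

# Lagged design matrix: EEG at t..t+max_lag predicts the stimulus at t.
X = np.column_stack([np.roll(eeg, -lag, axis=0) for lag in range(max_lag + 1)])
X, y = X[:-max_lag], spectrogram[:-max_lag]  # drop wrapped-around rows

recon = Ridge(alpha=10.0).fit(X, y)
reconstructed = recon.predict(X)  # decoded spectrogram (fit data; CV in practice)
```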
Indeed, participants report hearing words as a function of semantic probability given the speaker, scaling with exposure (analysis logic sketched below).
But how? Predictive coding invokes prediction errors, but Bayesian inference requires sharpening. Does the brain represent un-/expected information?
🧵2/16
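The behavioral logic, roughly (simulated data; the preprint's actual model may differ, e.g. mixed effects): the probability of reporting the speaker-congruent word rises with semantic probability, more steeply with exposure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 600
df = pd.DataFrame({
    "p_semantic": rng.uniform(0, 1, n),  # P(word | speaker context)
    "exposure": rng.uniform(0, 1, n),    # normalized position in session
})
# Simulate reports: use of the speaker prior grows with exposure.
logit = -1.5 + 2.0 * df.p_semantic + 1.5 * df.p_semantic * df.exposure
df["report_congruent"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit("report_congruent ~ p_semantic * exposure", data=df).fit(disp=0)
print(fit.params)
```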
We played morphed audio files (e.g., sea/tea) and had participants report which of the two words they had heard. Critically, the same morphs were played in different speaker contexts, with speaker-specific feedback reinforcing robust speaker-specific semantic expectations.
🚨 Fresh preprint w/ @helenblank.bsky.social!
How does the brain acquire expectations about a conversational partner, and how are priors integrated w/ sensory inputs?
Current evidence diverges. Is it prediction error? Sharpening?
Spoiler: It's both. 😉
🧵1/16
www.biorxiv.org/content/10.1...
It's been a while since our last laminar MEG paper, but we're back! This time we push beyond deep versus superficial distinctions and go whole hog. Check it out; lots more exciting stuff to come! 🧠