Temporally precise sensory encoding of predicted content, entraining motor oscillations to derive timing. @akalt.bsky.social's first study, out in @currentbiology.bsky.social, testing parts of this idea (tinyurl.com/TiCSKaltenma...). Huge thanks @leverhulme.ac.uk @erc.europa.eu, and great work Aaron!
📢 PhD position in Developmental Language Modelling
(PLZ RT)
What can human language acquisition teach us about training language models? Join us as a PhD student!
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-developmental-language @carorowland.bsky.social
@mpi-nl.bsky.social
📢 PhD position in the NeuroAI of Language
Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language
What is the brain for? Active inference is widely discussed as a unifying framework for understanding brain function, yet its empirical status remains debated. Our review identifies core predictions across the action-perception cycle and evaluates their empirical support: osf.io/preprints/ps...
It’s a short read, highlighting open questions about where and how feature-specific prediction errors are computed and relayed across the visual hierarchy.
Take a look!
direct.mit.edu/imag/article...
In our article, we discuss whether and how four accounts might explain these results:
(1) hierarchical predictive coding,
(2) feedback propagation of error signals,
(3) V1 as a comparator circuit for higher-level features,
(4) dendritic HPC.
Rather than focusing only on the magnitude of surprise, studies have begun to probe the content of prediction errors, showing that even early visual responses may primarily scale with high-level, rather than low-level, visual surprise.
🧠 Feature-specific predictive processing: What’s in a prediction error? 🧠
Perspective article w/ Cem Uran, @martinavinck.bsky.social & @predictivebrain.bsky.social now in @imagingneurosci.bsky.social, highlighting recent work on the nature of surprise reflected in visual prediction errors.
🧵👇
Congratulations Peter! Amazing news and well deserved!
Thanks Juan!
If you’re interested in more details, check out the full paper:
doi.org/10.1016/j.is...
Taken together, our findings show that high-level visual predictions are rapidly integrated during perceptual inference, suggesting that the brain's predictive machinery is finely tuned to utilize expectations abstracted away from low-level sensory details to facilitate perception.
We also found a small decrease in neural responses with semantic (word-based) surprise. Notably, low-level visual surprise had no detectable effect, even though stimuli were predictable all the way down to the pixel level.
Then we turned to the key questions: When and what kind of surprise drives visually evoked responses?
Neural responses ~190 ms post-stimulus onset over parieto-occipital electrodes were selectively increased by high-level visual surprise!
As a sanity check, we first used RSA to show that the CNN and other models of interest (semantic and task models) robustly explained the EEG responses independent of surprise.
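For readers unfamiliar with RSA, a minimal sketch of the core computation: build a representational dissimilarity matrix (RDM) for the model features and for the neural patterns, then correlate the two. This is a generic illustration, not the paper's actual pipeline; variable names and toy data are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    # Condensed RDM: pairwise (1 - Pearson r) distances between
    # condition patterns, input shape (n_conditions, n_features).
    return pdist(patterns, metric="correlation")

def rsa_score(model_features, neural_patterns):
    # Spearman correlation between the model RDM and the neural RDM.
    rho, _ = spearmanr(rdm(model_features), rdm(neural_patterns))
    return rho

# Toy data: 10 stimulus conditions, 50 features each (hypothetical).
rng = np.random.default_rng(0)
model = rng.normal(size=(10, 50))                    # e.g. CNN layer activations
eeg = model + rng.normal(scale=0.5, size=(10, 50))   # noisy "neural" patterns
score = rsa_score(model, eeg)
```

Computing this score per timepoint, on sliding windows of the EEG data, is what lets one ask *when* a given model explains the neural responses.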
We investigated these questions using EEG and a visual CNN. Participants viewed object images that were probabilistically predicted by preceding cues. We then quantified surprise trial-by-trial at low (early CNN layers) and high (late CNN layers) levels of visual feature abstraction.
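The posts don't spell out the surprise metric, but one common minimal choice is one minus the correlation between the feature vector predicted by the cue and the features of the presented image, computed separately at early and late CNN layers. A rough sketch under that assumption (all names and data are illustrative, not the paper's method):

```python
import numpy as np

def layer_surprise(predicted_feats, observed_feats):
    # Trial-wise surprise at one CNN layer: 1 - Pearson correlation
    # between cue-predicted and actually evoked feature vectors.
    r = np.corrcoef(predicted_feats, observed_feats)[0, 1]
    return 1.0 - r

rng = np.random.default_rng(1)
predicted_early = rng.normal(size=500)   # e.g. an early conv layer (hypothetical)
observed_early = rng.normal(size=500)    # features of the image actually shown
surprise = layer_surprise(predicted_early, observed_early)
```

Repeating this at an early and a late layer yields separate low-level and high-level surprise regressors that can then be related to trial-wise EEG amplitudes.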
Predictive processing theories propose that the brain continuously generates predictions about incoming sensory input.
But what exactly does the brain predict? Low-level (edges, contrasts) and/or high-level visual features (textures, objects)?
And when do these predictions shape neural responses?
High-level visual surprise is rapidly integrated during perceptual inference!
🚨 New paper 🚨 out now in @cp-iscience.bsky.social with @paulapena.bsky.social and @mruz.bsky.social
www.cell.com/iscience/ful...
Summary 🧵 below 👇
🧠 Regularization, Action, and Attractors in the Dynamical “Bayesian” Brain
direct.mit.edu/jocn/article...
(still uncorrected proofs, but they should post the corrected one soon--also OA is forthcoming, for now PDF at brainandexperience.org/pdf/10.1162-...)
@dotproduct.bsky.social's first first author paper is finally out in @sfnjournals.bsky.social! Her findings show that content-specific predictions fluctuate with alpha frequencies, suggesting a more specific role for alpha oscillations than we may have thought. With @jhaarsma.bsky.social. 🧠🟦 🧠🤖
If you’re into predictive processing and curious about the ‘what & when of visual surprise’, come see me at #CCN2025 in Amsterdam!
Poster B23 · Wednesday at 1:00 pm · de Brug.
Hi, we will have three NeuroAI postdoc openings (3 years each, fully funded) to work with Sebastian Musslick (@musslick.bsky.social), Pascal Nieters and myself on task-switching, replay, and visual information routing.
Reach out if you are interested in any of the above, I'll be at CCN next week!
We are recruiting a new PI at the FIL @imagingneuroucl.bsky.social, Associate or Full Professor. This is an amazing place to do cognitive neuroscience, in the heart of London. If you or someone you know might be interested, please pass it on. #neuroskyence
www.ucl.ac.uk/work-at-ucl/...
If you are interested in pursuing a PhD in cognitive neuroscience, especially targeting conscious vs. unconscious processing, contact me. We are recruiting 🙏🧠 please RT
🚨 We’re hiring a postdoc!
Join the FLARE project @cimcyc.bsky.social to study sudden perceptual learning using fMRI, RSA, and DNNs.
🧠 2 years, fully funded, flexible start
More info 👉 gonzalezgarcia.github.io/postdoc/
DMs or emails welcome! Please share!
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.
Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy
arxiv.org/abs/2507.03168
If you are interested in more details check out the preprint here:
www.biorxiv.org/content/10.1...
Taken together, our findings demonstrate that high-level visual predictions are rapidly integrated during perceptual inference. This suggests that the brain's predictive machinery is finely tuned to utilize expectations abstracted away from low-level sensory details, likely to facilitate perception.
We also found a curious decrease in ERP amplitude by semantic (word-based) surprise. Critically, we found no modulation by low-level visual surprise, even though stimuli were predictable all the way down to the pixel level.
Next, we turned to the key questions – when and what kind of surprise drives visually evoked responses? Results showed that neural responses around 200 ms post-stimulus onset over parieto-occipital electrodes were selectively enhanced by high-level visual surprise.