Do we need a Nature paper for that?
Language models cannot reliably distinguish belief from knowledge and fact
www.nature.com/articles/s42...
@davogelsang.bsky.social
Lecturer in Brain & Cognition at the University of Amsterdam
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
The brilliant @kasiamojescik.bsky.social and Martha McGill join @claudiahammond.bsky.social and @catherineloveday.bsky.social on BBC Radio 4 All in the Mind this morning to launch our new public survey of vivid memories. You can take part here: cambridge.eu.qualtrics.com/jfe/form/SV_...
21.10.2025 08:24

Why do brain networks vary? Do these differences shape behavior? If every 🧠 is unique, how can we detect common features of brain organization?
@rodbraga.bsky.social and I dig in, in @annualreviews.bsky.social (ahead of print):
go.illinois.edu/Gratton2025-...
#neuroskyence #psychscisky #MedSky
🧵👇
Vacancy alert! Prof. Ineke van der Ham and I are looking for a post-doc (from early 2026 on). In this project we aim to investigate why some individuals often get lost, a condition recently coined atopia. The project involves behavioral testing, eye tracking, and potentially fMRI. See dewegkwijt.com
16.10.2025 09:38

Every school day centers on learning. Odd, then, that there are no classes teaching you how best to go about it. In a new Klokhuis episode, Erik Scherder (from a sports field) and I (from our Sylvius VR labs) give tips on learning how to learn: hetklokhuis.nl/tv-uitzendin...
16.10.2025 07:10

For all the knucklehead reviewers out there.
Principles for proper peer review - Earl K. Miller
jocnf.pubpub.org/pub/qag76ip8...
#neuroscience
In our Trends in Cogn Sci paper we point to the connectivity crisis in task-based human EEG/MEG research: many connectivity metrics, too little replication. Time for community-wide benchmarking to build robust, generalisable measures across labs & tasks. www.sciencedirect.com/science/arti...
18.09.2025 15:23

Thank you; and that is an interesting question. My prediction is that it may not work so well (would be fun to test)
18.09.2025 15:56

Thank you for your reply. Unfortunately, we did not examine within-category effects, but that would certainly be interesting to do
18.09.2025 15:51

Our takeaway:
Memory has a geometry.
The magnitude of representations predicts memorability across vision and language, providing a new lens for understanding why some stimuli are memorable.
Think of memory as geometry:
An item's vector length in representational space predicts how likely it is to stick in your mind, at least for images and words.
So what did we learn?
✅ Robust effect for images
✅ Robust effect for words
❌ No effect for voices
→ Memorability seems tied to how strongly items project onto meaningful representational dimensions, and the effect does not extend to all sensory domains.
Then we asked: does this principle also apply to voices?
Using a recent dataset with >600 voice clips, we tested whether wav2vec embeddings showed the same effect.
👉 They didn't. No consistent link between L2 norm and voice memorability.
And crucially:
This effect held even after controlling for word frequency, valence, and size.
So representational magnitude is not just a proxy for familiar or emotionally loaded words.
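One standard way to "control for" covariates like frequency or valence is residualization: regress both variables on the confound and correlate the residuals. A minimal pure-Python sketch with toy values; the variable names and data below are illustrative assumptions, not the paper's actual pipeline, which may use a different partialling method.

```python
def mean(v):
    return sum(v) / len(v)

def residualize(y, x):
    # Residuals of y after simple (one-covariate) OLS regression on x.
    mx, my = mean(x), mean(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    alpha = my - beta * mx
    return [b - (alpha + beta * a) for a, b in zip(x, y)]

def pearson(x, y):
    # Pearson correlation coefficient.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy data (hypothetical): embedding L2 norms, memorability scores,
# and a confound such as word frequency.
norms = [1.0, 2.0, 3.0, 4.0, 5.0]
memorability = [0.30, 0.45, 0.50, 0.70, 0.75]
frequency = [5.0, 4.0, 4.5, 2.0, 1.0]

# Correlate the parts of norm and memorability not explained by frequency.
r_partial = pearson(residualize(norms, frequency),
                    residualize(memorability, frequency))
print(round(r_partial, 3))
```

If the norm-memorability link survives this residualization, magnitude is not merely a proxy for the confound.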
Then we asked: is this just a visual trick, or is it present in other domains as well?
When we turned to words, the result was striking:
Across three large datasets, words with higher vector magnitude in their embeddings were consistently more memorable, revealing the same L2-norm principle.
In CNNs, the effect is strongest in later layers, where abstract, conceptual features are represented.
👉 Larger representational magnitude → higher memorability.
We first wanted to examine whether we could replicate this L2 norm effect as reported by Jaegle et al. (2019).
Using the massive THINGS dataset (>26k images, 13k participants), we replicated that the L2 norm of CNN representations predicts image memorability.
Why do we remember some things better than others?
Memory varies across people, but some items are intrinsically more memorable.
Jaegle et al. (2019) showed that a simple geometric property of representations, the L2 norm (vector magnitude), positively correlates with image memorability.
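The core measurement is simple: compute each stimulus embedding's L2 norm and rank-correlate the norms with memorability scores. A self-contained pure-Python sketch; the four embeddings and scores below are toy values, not data from the paper.

```python
import math

def l2_norm(vec):
    # Vector magnitude: square root of the sum of squared components.
    return math.sqrt(sum(x * x for x in vec))

def ranks(values):
    # 1-based average ranks; tied values share the mean of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks.
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy data (hypothetical): four stimulus embeddings and memorability scores.
embeddings = [[0.1, 0.2], [1.0, 1.0], [2.0, 0.5], [3.0, 2.0]]
memorability = [0.35, 0.55, 0.60, 0.80]

norms = [l2_norm(e) for e in embeddings]
print(spearman(norms, memorability))  # 1.0 here: norms and scores share the same rank order
```

In practice one would swap the toy lists for real DNN embeddings (e.g. a CNN's later-layer activations) and behavioral memorability scores.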
New preprint out together with @mheilbron.bsky.social
We find that a stimulus's representational magnitude, the L2 norm of its DNN representation, predicts intrinsic memorability not just for images, but for words too.
www.biorxiv.org/content/10.1...
Interested in hippocampal dynamics and their interactions with cortical rhythms?
Our physically constrained model of cortico-hippocampal interactions comes complete with fast, geometrically informed numerical simulation (code available in the embedded GitHub repo):
www.biorxiv.org/content/10.1...