@davogelsang.bsky.social
Lecturer in Brain & Cognition at the University of Amsterdam
For all the knucklehead reviewers out there.
Principles for proper peer review - Earl K. Miller
jocnf.pubpub.org/pub/qag76ip8...
#neuroscience
In our Trends in Cogn Sci paper we point to the connectivity crisis in task-based human EEG/MEG research: many connectivity metrics, too little replication. Time for community-wide benchmarking to build robust, generalisable measures across labs & tasks. www.sciencedirect.com/science/arti...
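For readers unfamiliar with the term, here is a minimal sketch of one of the many connectivity metrics in question, the phase-locking value between two sensors, written with plain NumPy/SciPy. This is an illustration only, not the benchmarking pipeline the paper proposes.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Time-averaged PLV between two (ideally narrow-band) 1-D signals."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Toy usage: two noisy signals sharing a 10 Hz component
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(f"PLV = {phase_locking_value(x, y):.2f}")
```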
Thank you; and that is an interesting question. My prediction is that it may not work so well (would be fun to test).
Thank you for your reply. Unfortunately, we did not examine within-category effects, but that would certainly be interesting to do.
Our takeaway:
Memory has a geometry.
The magnitude of representations predicts memorability across vision and language, providing a new lens for understanding why some stimuli are memorable.
Think of memory as geometry:
An item's vector length in representational space predicts how likely it is to stick in your mind, at least for images and words.
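For concreteness, the "vector length" here is the standard Euclidean (L2) norm of a d-dimensional representation vector:

```latex
\lVert \mathbf{x} \rVert_2 = \sqrt{\sum_{i=1}^{d} x_i^{2}}
```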
So what did we learn?
✅ Robust effect for images
✅ Robust effect for words
❌ No effect for voices
👉 Memorability seems tied to how strongly items project onto meaningful representational dimensions, not all sensory domains.
Then we asked: does this principle also apply to voices?
Using a recent dataset with >600 voice clips, we tested whether wav2vec embeddings showed the same effect.
👉 They didn't. No consistent link between L2 norm and voice memorability.
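A minimal sketch of this check, assuming the wav2vec clip embeddings and voice memorability scores are precomputed; the file names below are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

embeddings = np.load("voice_embeddings.npy")       # one wav2vec embedding per voice clip, shape [n_clips, dim]
memorability = np.load("voice_memorability.npy")   # one memorability score per clip, shape [n_clips]

l2_norms = np.linalg.norm(embeddings, axis=1)      # representational magnitude per clip
rho, p = spearmanr(l2_norms, memorability)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```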
And crucially:
This effect held even after controlling for word frequency, valence, and size.
So representational magnitude is not just a proxy for familiar or emotionally loaded words.
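A sketch of such a control analysis, assuming a per-word table containing the L2 norm and the covariates; the file and column names are hypothetical, and the exact model in the preprint may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("word_data.csv")  # hypothetical table: one row per word

# z-score predictors so their coefficients are on a comparable scale
for col in ["l2_norm", "log_frequency", "valence", "size"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# If l2_norm stays significant with the covariates in the model, the magnitude
# effect is not just a proxy for frequency, valence, or size.
model = smf.ols("memorability ~ l2_norm + log_frequency + valence + size", data=df).fit()
print(model.summary())
```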
Then we asked: is this just a visual trick, or is it present in other domains as well?
When we turned to words, the result was striking:
Across three large datasets, words with higher vector magnitude in their embeddings were consistently more memorable, revealing the same L2 norm principle.
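A sketch of the word analysis under simple assumptions: GloVe vectors stand in for whichever embeddings were actually used, and word_memorability.csv is a hypothetical file with one memorability score per word.

```python
import numpy as np
import pandas as pd
import gensim.downloader as api
from scipy.stats import spearmanr

glove = api.load("glove-wiki-gigaword-300")      # pretrained word vectors (illustration only)
words = pd.read_csv("word_memorability.csv")     # hypothetical columns: word, memorability

# L2 norm of each word's embedding, keeping only words that have a vector
pairs = [(np.linalg.norm(glove[w]), m)
         for w, m in zip(words["word"], words["memorability"])
         if w in glove.key_to_index]
norms, mem = map(np.array, zip(*pairs))

rho, p = spearmanr(norms, mem)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```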
In CNNs, the effect is strongest in later layers, where abstract, conceptual features are represented.
👉 Larger representational magnitude → higher memorability.
We first wanted to examine whether we could replicate this L2 norm effect as reported by Jaegle et al. (2019).
Using the massive THINGS dataset (>26k images, 13k participants), we replicated the finding that the L2 norm of CNN representations predicts image memorability.
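A minimal sketch of this kind of analysis, assuming a preprocessed THINGS image batch and per-image memorability scores saved to the (hypothetical) files below; an off-the-shelf ResNet-50 stands in for whichever CNN was actually used, and tapping an early and a late block also illustrates the layer-depth pattern noted above.

```python
import numpy as np
import torch
from scipy.stats import spearmanr
from torchvision.models import resnet50, ResNet50_Weights

# Hypothetical precomputed inputs: preprocessed image batch and per-image memorability scores
images = torch.load("things_images.pt")             # tensor [n, 3, 224, 224]
memorability = np.load("things_memorability.npy")   # array [n]

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

# Grab flattened activations from an early and a late residual block via forward hooks
activations = {}
def save(name):
    def hook(module, inputs, output):
        activations[name] = output.flatten(start_dim=1).detach()
    return hook

model.layer1.register_forward_hook(save("early block"))
model.layer4.register_forward_hook(save("late block"))

with torch.no_grad():
    model(images)

for name, acts in activations.items():
    l2_norms = acts.norm(dim=1).numpy()              # representational magnitude per image
    rho, p = spearmanr(l2_norms, memorability)
    print(f"{name}: Spearman rho = {rho:.3f} (p = {p:.3g})")
```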
Why do we remember some things better than others?
Memory varies across people, but some items are intrinsically more memorable.
Jaegle et al. (2019) showed that a simple geometric property of representations, the L2 norm (vector magnitude), positively correlates with image memorability.
New preprint out together with @mheilbron.bsky.social
We find that a stimulus' representational magnitude (the L2 norm of its DNN representation) predicts intrinsic memorability not just for images, but for words too.
www.biorxiv.org/content/10.1...
Interested in hippocampal dynamics and their interactions with cortical rhythms?
Our physically constrained model of cortico-hippocampal interactions, complete with fast, geometrically informed numerical simulation (available in the embedded GitHub repo):
www.biorxiv.org/content/10.1...