
micha heilbron

@mheilbron.bsky.social

Assistant Professor of Cognitive AI @UvA Amsterdam · language and vision in brains & machines · cognitive science 🤝 AI 🤝 cognitive neuroscience · michaheilbron.github.io

793 Followers  |  328 Following  |  67 Posts  |  Joined: 16.08.2023

Latest posts by mheilbron.bsky.social on Bluesky

so nice to see this out sush!!

19.11.2025 08:47 – 👍 1  🔁 0  💬 1  📌 0
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...

🚨 New Preprint!
How can we model natural scene representations in visual cortex? A solution lies in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social, @alexanderkroner.bsky.social, @carmenamme.bsky.social, @timkietzmann.bsky.social
🧵 1/14

18.11.2025 12:34 – 👍 82  🔁 28  💬 3  📌 5
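For readers who want the gist in code: a minimal sketch of the core idea in this preprint (predicting the features of the next glimpse during eye movements), assuming a generic feature encoder and a simple MLP predictor. The names, dimensions, and training setup are illustrative, not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's code): given features of the current glimpse
# and the upcoming saccade vector, predict the features of the next glimpse.
import torch
import torch.nn as nn

feat_dim, saccade_dim = 512, 2   # assumed: encoder feature size, (dx, dy) saccade

class NextGlimpsePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + saccade_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, current_feat, saccade):
        # Predict the feature vector of the next fixation's glimpse
        return self.mlp(torch.cat([current_feat, saccade], dim=-1))

# Toy training step on random stand-in data
model = NextGlimpsePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cur = torch.randn(32, feat_dim)
sac = torch.randn(32, saccade_dim)
nxt = torch.randn(32, feat_dim)
loss = nn.functional.mse_loss(model(cur, sac), nxt)
loss.backward()
opt.step()
```

In practice the inputs would be features extracted at fixation locations by a pretrained vision model rather than random tensors.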

archive.ph/smEj0 (or, unpaywalled 🤫)

07.11.2025 10:32 – 👍 2  🔁 0  💬 0  📌 0
The Case That A.I. Is Thinking ChatGPT does not have an inner life. Yet it seems to know what it's talking about.

This is, without a doubt, the best popular article about the current state of AI. And on whether LLMs are truly 'thinking' or 'understanding' -- and what that question even means

www.newyorker.com/magazine/202...

07.11.2025 10:32 – 👍 5  🔁 0  💬 1  📌 0

omg. what journal? name and shame

19.09.2025 12:34 – 👍 0  🔁 0  💬 0  📌 0

huh! if these effects are similar and consistent, I think it should work, but the q. is: how do you get a vector representation for novel pseudowords? we currently use lexicosemantic word vectors, and they are undefined for novel words.

so how to represent the novel words? v. interesting test case

19.09.2025 12:32 – 👍 0  🔁 0  💬 0  📌 0
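One hedged possibility for the pseudoword problem raised above: compose a vector from character n-grams, fastText-style, so that a novel form still gets a defined (if purely form-based) representation. The hashing scheme and sizes below are illustrative assumptions, not the approach used in the work being discussed.

```python
# Sketch: fastText-style subword composition for out-of-vocabulary pseudowords.
import zlib
import numpy as np

dim, n_buckets = 100, 50_000
rng = np.random.default_rng(0)
ngram_table = rng.standard_normal((n_buckets, dim)).astype(np.float32)

def char_ngrams(word, n_min=3, n_max=5):
    w = f"<{word}>"                       # boundary markers, as in fastText
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def pseudoword_vector(word):
    # Average the embeddings of the word's hashed character n-grams
    idx = [zlib.crc32(g.encode()) % n_buckets for g in char_ngrams(word)]
    return ngram_table[idx].mean(axis=0)

vec = pseudoword_vector("blick")          # defined even though "blick" is novel
print(vec.shape)                          # (100,)
```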

@nicolecrust.bsky.social might be of interest

18.09.2025 11:52 – 👍 0  🔁 0  💬 0  📌 0

New paper on memorability, with @davogelsang.bsky.social!

18.09.2025 10:45 – 👍 10  🔁 0  💬 0  📌 0
Representational magnitude as a geometric signature of image and word memorability What makes some stimuli more memorable than others? While memory varies across individuals, research shows that some items are intrinsically more memorable, a property quantifiable as "memorability". ...

New preprint out together with @mheilbron.bsky.social

We find that a stimulus' representational magnitude – the L2 norm of its DNN representation – predicts intrinsic memorability not just for images, but for words too.
www.biorxiv.org/content/10.1...

18.09.2025 09:53 – 👍 23  🔁 6  💬 4  📌 1
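A rough sketch of the quantity described above (representational magnitude as the L2 norm of a DNN representation) and how one might relate it to memorability scores. The backbone, layer choice, and stand-in data are assumptions for illustration, not the paper's pipeline.

```python
# Sketch: per-stimulus L2 norm of penultimate-layer features vs. memorability.
import torch
import torch.nn as nn
from torchvision import models
from scipy.stats import spearmanr

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()               # expose the penultimate-layer features
backbone.eval()

images = torch.randn(16, 3, 224, 224)     # stand-in for a real image set
memorability = torch.rand(16)             # stand-in behavioural memorability scores

with torch.no_grad():
    feats = backbone(images)              # (16, 2048)
magnitude = feats.norm(p=2, dim=1)        # representational magnitude per image

rho, p = spearmanr(magnitude.numpy(), memorability.numpy())
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```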

Together, our results support a classic idea: cognitive limitations can be a powerful inductive bias for learning

Yet they also reveal a curious distinction: a model with more human-like *constraints* is not necessarily more human-like in its predictions

18.08.2025 12:40 – 👍 0  🔁 0  💬 0  📌 0

This paradox – better language models yielding worse behavioural predictions – could not be accounted for by prior explanations: the mechanism appears distinct from those linked to superhuman training scale or memorisation

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0

However, we then used these models to predict human behaviour

Strikingly, these same models that were demonstrably better at the language task were worse at predicting human reading behaviour

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0

The benefit was robust

Fleeting memory models achieved better next-token prediction (lower loss) and better syntactic knowledge (higher accuracy) on the BLiMP benchmark

This was consistent across seeds and for both 10M and 100M training sets

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0

But we noticed this naive decay was too strong

Human memory has a brief 'echoic' buffer that perfectly preserves the immediate past. When we added this – a short window of perfect retention before the decay – the pattern flipped

Now, fleeting memory *helped* (lower loss)

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0
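A sketch of the retention profile described in this post, under assumed parameter choices (not the authors' exact parameterisation): perfect retention over a short echoic window of the most recent tokens, followed by a power-law fall-off with distance.

```python
# Sketch: retention weight as a function of token distance (0 = current token).
import torch

def retention_profile(distances, window=4, alpha=1.0):
    """Perfect retention within `window`, power-law decay beyond it."""
    distances = distances.float()
    decayed = (distances - window + 1.0).clamp(min=1.0) ** (-alpha)
    return torch.where(distances < window, torch.ones_like(decayed), decayed)

d = torch.arange(10)
print(retention_profile(d))   # first `window` weights are 1.0, then a power-law decay
```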

Our first attempt, a "naive" memory decay starting from the most recent word, actually *impaired* language learning. Models with this decay had higher validation loss, and this worsened (even higher loss) as the decay became stronger

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0

To test this in a modern context, we propose the 'fleeting memory transformer'

We applied a power-law memory decay to the self-attention scores, simulating how access to past words fades over time, and ran controlled experiments on the developmentally realistic BabyLM corpus

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0
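A hedged sketch of one way such a power-law decay could enter causal self-attention: added as a log-decay bias to the attention logits, which multiplicatively down-weights distant tokens after the softmax. The paper's exact formulation may differ.

```python
# Sketch: causal self-attention with a power-law memory-decay bias.
import torch
import torch.nn.functional as F

def decayed_attention(q, k, v, alpha=0.5):
    T, d = q.shape[-2], q.shape[-1]
    logits = q @ k.transpose(-2, -1) / d ** 0.5              # (..., T, T)
    i = torch.arange(T).unsqueeze(1)                          # query positions
    j = torch.arange(T).unsqueeze(0)                          # key positions
    dist = (i - j).clamp(min=0).float()                       # distance into the past
    decay_bias = -alpha * torch.log1p(dist)                   # ~ (1 + dist) ** -alpha after softmax
    causal = torch.where(j <= i, decay_bias,
                         torch.full_like(decay_bias, float("-inf")))
    return F.softmax(logits + causal, dim=-1) @ v

q = k = v = torch.randn(1, 8, 16)                              # (batch, T, d)
out = decayed_attention(q, k, v)
print(out.shape)                                               # torch.Size([1, 8, 16])
```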

However, this appears difficult to reconcile with the success of transformers, which can learn language very effectively, despite lacking working memory limitations or other recency biases

Would the blessing of fleeting memory still hold in transformer language models?

18.08.2025 12:40 – 👍 0  🔁 0  💬 1  📌 0

A core idea in cognitive science is that the fleetingness of working memory isn't a flaw

It may actually help language learning by forcing a focus on the recent past and providing an incentive to discover abstract structure rather than surface details

18.08.2025 12:40 – 👍 2  🔁 0  💬 1  📌 0
Human-like fleeting memory improves language learning but impairs reading time prediction in transformer language models Human memory is fleeting. As words are processed, the exact wordforms that make up incoming sentences are rapidly lost. Cognitive scientists have long believed that this limitation of memory may, para...

New preprint! w/@drhanjones.bsky.social

Adding human-like memory limitations to transformers improves language learning, but impairs reading time prediction

This supports ideas from cognitive science but complicates the link between architecture and behavioural prediction
arxiv.org/abs/2508.05803

18.08.2025 12:40 – 👍 10  🔁 2  💬 1  📌 0
Poster Presentation

On Wednesday, Maithe van Noort will present a poster on "Compositional Meaning in Vision-Language Models and the Brain"

First results from a much larger project on visual and linguistic meaning in brains and machines, with many collaborators -- more to come!
t.ly/TWsyT

12.08.2025 11:14 – 👍 0  🔁 0  💬 0  📌 0
Poster Presentation

On Friday, during a contributed talk (and a poster), @wiegerscheurer will present the project he spearheaded: "A hierarchy of spatial predictions across human visual cortex during natural vision" (full preprint soon)

t.ly/fTJqy

12.08.2025 11:14 – 👍 3  🔁 1  💬 1  📌 0

CCN has arrived here in Amsterdam!

Come find me to meet or catch up

Some highlights from students and collaborators:

12.08.2025 11:14 – 👍 8  🔁 0  💬 1  📌 0
Why can't I think of that name? Er, you know who... whatshisname! Do you also sometimes struggle to come up with a name? Brain researcher Micha Heilbron explains why that happens - and why a name actually isn't that important.

Why do you forget names, yet remember exactly what someone does? And are memories ever really gone for good?

I talked with Oplossing Gezocht about how our brain stores information and why forgetting is actually quite clever:
www.nemokennislink.nl/publicaties/...

15.07.2025 08:47 – 👍 3  🔁 0  💬 0  📌 0

Exciting new preprint from the lab: โ€œAdopting a human developmental visual diet yields robust, shape-based AI visionโ€. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168

08.07.2025 13:03 – 👍 139  🔁 59  💬 3  📌 11
Higher-level spatial prediction in natural vision across mouse visual cortex Theories of predictive processing propose that sensory systems constantly predict incoming signals, based on spatial and temporal context. However, evidence for prediction in sensory cortex largely co...

New preprint, w/ @predictivebrain.bsky.social !

we've found that visual cortex, even when just viewing natural scenes, predicts *higher-level* visual features

This aligns with developments in ML, but challenges some assumptions about early sensory cortex

www.biorxiv.org/content/10.1...

23.05.2025 11:39 – 👍 81  🔁 34  💬 1  📌 2

i'm all in the "this is a neat way to help explain things" camp fwiw :)

23.05.2025 15:53 – 👍 0  🔁 0  💬 0  📌 0

Our findings, together with some other recent studies, suggest the brain may use a similar strategy – constantly predicting higher-level features – to efficiently learn robust visual representations of (and from!) the natural world

23.05.2025 11:39 – 👍 6  🔁 0  💬 0  📌 0

This preference for higher-level information departs from traditional predictive coding -- but aligns with recent, successful algorithms in AI for predictive self-supervised learning, which encourage predicting higher- rather than lower-level visual features (e.g. MAE, CPC, JEPA)

23.05.2025 11:39 – 👍 5  🔁 0  💬 2  📌 0
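Loosely, the shared ingredient of the objectives mentioned above (CPC/JEPA-style latent prediction) is that the prediction target is an encoded, higher-level representation rather than raw pixels. A generic sketch of that family, with all module names and sizes as illustrative assumptions rather than any specific published architecture:

```python
# Sketch: latent-prediction objective (predict features, not pixels).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))         # online encoder
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))  # e.g. an EMA copy
predictor = nn.Linear(256, 256)

context_view = torch.randn(8, 3, 32, 32)   # visible / context crop
target_view = torch.randn(8, 3, 32, 32)    # to-be-predicted crop

with torch.no_grad():
    target_feat = target_encoder(target_view)     # higher-level target, no gradient

pred = predictor(encoder(context_view))
loss = F.mse_loss(pred, target_feat)               # loss in feature space, not pixel space
loss.backward()
```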

So, what does this all mean?

The visual system seems to be constantly engaged in a sophisticated guessing game, predicting sensory input based on context

But interestingly, it seems to predict more abstract, higher-level properties, even in the earliest stages of cortex

23.05.2025 11:39 – 👍 2  🔁 0  💬 1  📌 0

Remarkably, these prediction effects appeared independent of recent experience with the specific images presented

This suggests they rely on long-term, ingrained priors about the statistical structure of the visual world, rather than on recent exposure to these specific images

23.05.2025 11:39 – 👍 3  🔁 0  💬 1  📌 0
