Tim Kietzmann

@timkietzmann.bsky.social

ML meets Neuroscience #NeuroAI, Full Professor at the Institute of Cognitive Science (Uni Osnabrück), prev. @ Donders Inst., Cambridge University

3,083 Followers  |  323 Following  |  78 Posts  |  Joined: 15.08.2024

Latest posts by timkietzmann.bsky.social on Bluesky

Using AI to 'see' what we see: Fed the right information, large language models can match what the brain sees when it takes in an everyday scene such as children playing or a big city skyline, a new study led by Ian Charest finds.

#AI "Ultimately, this is a step forward in understanding how the human brain understands meaning from the visual world." #LLMs @mila-quebec.bsky.social @adriendoerig.bsky.social @timkietzmann.bsky.social @natmachintell.nature.com
nouvelles.umontreal.ca/en/article/2...

07.08.2025 19:54 — 👍 4    🔁 2    💬 0    📌 0

A long time coming, now out in @natmachintell.nature.com: Visual representations in the human brain are aligned with large language models.

Check it out (and come chat with us about it at CCN).

07.08.2025 14:16 — 👍 15    🔁 0    💬 0    📌 0
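For readers curious what "aligned" means operationally: a minimal sketch of a linear encoding model from LLM embedding space to voxel space, scored by held-out correlation. The data shapes, the random placeholders, and the use of RidgeCV are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: ridge regression from LLM embeddings to fMRI voxel responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

llm_embeddings = np.random.randn(1000, 768)  # placeholder: (n_scenes, emb_dim)
fmri_responses = np.random.randn(1000, 500)  # placeholder: (n_scenes, n_voxels)

X_train, X_test, y_train, y_test = train_test_split(
    llm_embeddings, fmri_responses, test_size=0.2, random_state=0)

# Fit a linear map from LLM space to voxel space; regularisation strength
# is chosen by cross-validation.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)

# Alignment score: per-voxel correlation between predicted and held-out
# responses, averaged over voxels.
pred = model.predict(X_test)
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(pred.shape[1])]
print(f"mean held-out voxel correlation: {np.mean(r):.3f}")
```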

For completeness' sake: we know the other team and cite both of their papers in the preprint.

26.07.2025 18:46 — 👍 1    🔁 0    💬 1    📌 0

The devil is in the details, as usual.

They (and others) focused on acuity, while we show that the actual gains do not come from acuity but from the development of contrast sensitivity.

26.07.2025 18:44 — 👍 1    🔁 0    💬 2    📌 0

To be honest, so far it has exceeded our expectations across the board.

A big surprise was that visual acuity (i.e. initial blurring) had so little impact. This is what others had focused on in the past. Instead, the development of contrast sensitivity gets you most of the way there.

08.07.2025 21:00 — 👍 8    🔁 0    💬 1    📌 0
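To make the distinction concrete, here is a toy sketch of the two manipulations: acuity as a Gaussian low-pass filter versus a crude contrast-sensitivity limit that discards low-contrast structure. Both functions are simplified stand-ins, not the preprint's actual filters.

```python
# Illustrative contrast between the two developmental factors discussed here.
import numpy as np
from scipy.ndimage import gaussian_filter

def reduce_acuity(img, sigma=2.0):
    """Acuity limit modelled as a simple Gaussian blur (low-pass filter)."""
    return gaussian_filter(img, sigma=sigma)

def reduce_contrast_sensitivity(img, threshold=0.2):
    """Crude contrast-sensitivity limit: zero out structure whose contrast
    (deviation from the mean) falls below a threshold."""
    mean = img.mean()
    contrast = img - mean
    cutoff = threshold * np.abs(contrast).max()
    return mean + np.where(np.abs(contrast) < cutoff, 0.0, contrast)

img = np.random.rand(224, 224)  # placeholder grayscale image in [0, 1]
blurred = reduce_acuity(img)
low_cs = reduce_contrast_sensitivity(img)
```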

Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168

08.07.2025 13:03 — 👍 125    🔁 56    💬 3    📌 10

Thank you!

08.07.2025 16:20 — 👍 1    🔁 0    💬 0    📌 0

We are incredibly excited about this because DVD may offer a resource-efficient path towards safer, more human-like AI vision — and suggests that biology, neuroscience, and psychology have much to offer in guiding the next generation of artificial intelligence. #NeuroAI #AI /fin

08.07.2025 13:03 — 👍 9    🔁 1    💬 2    📌 0

In summary, DVD-training yields models that rely on a fundamentally different feature set, shifting from distributed local textures to integrative, shape-based features as the foundation for their decisions. 9/

08.07.2025 13:03 — 👍 7    🔁 1    💬 1    📌 0

Result 4: How about adversarial robustness? DVD-trained models also showed greater resilience to all black- and white-box attacks tested, performing 3–5 times better than baselines under high-strength perturbations. 8/

08.07.2025 13:03 — 👍 3    🔁 0    💬 1    📌 0
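For intuition, here is a minimal white-box robustness check of the kind summarised above, using single-step FGSM as the illustrative attack. The preprint evaluates a broader suite of black- and white-box attacks; the model and batch names below are hypothetical.

```python
# Sketch: accuracy under a fast gradient sign method (FGSM) perturbation.
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, images, labels, eps):
    """Accuracy under a single-step FGSM perturbation of strength eps."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel by eps in the direction that increases the loss.
    adv = (images + eps * images.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        return (model(adv).argmax(dim=1) == labels).float().mean().item()

# Usage (hypothetical model and batch):
# for eps in (0.01, 0.03, 0.1):
#     print(eps, fgsm_accuracy(dvd_model, x, y, eps))
```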

Result 3: DVD-trained models exhibit more human-like robustness to Gaussian blur compared to baselines, plus an overall improved robustness to all image perturbations tested. 7/

08.07.2025 13:03 — 👍 3    🔁 0    💬 1    📌 0
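A sketch of the corresponding perturbation sweep, accuracy as a function of Gaussian blur strength; the sigma values and evaluation setup are assumptions for illustration.

```python
# Sketch: top-1 accuracy at increasing Gaussian blur strengths.
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def blur_sweep(model, images, labels, sigmas=(0.5, 1.0, 2.0, 4.0)):
    accs = []
    for sigma in sigmas:
        k = int(2 * round(3 * sigma) + 1)  # odd kernel covering ~3 sigma
        blurred = TF.gaussian_blur(images, kernel_size=k, sigma=sigma)
        accs.append((model(blurred).argmax(1) == labels).float().mean().item())
    return accs
```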

Result 2: DVD-training enabled abstract shape recognition in cases where frontier AI models fail spectacularly, despite being explicitly prompted.

t-SNE nicely visualises the fundamentally different approach of DVD-trained models. 6/

08.07.2025 13:03 — 👍 4    🔁 0    💬 1    📌 0
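For those wanting to reproduce this kind of figure, a minimal t-SNE recipe over model features; the feature matrix and labels below are random placeholders.

```python
# Sketch: embed penultimate-layer features of shape stimuli into 2-D.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

features = np.random.randn(500, 512)         # placeholder: (n_stimuli, dim)
shape_labels = np.random.randint(0, 5, 500)  # placeholder shape categories

xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(xy[:, 0], xy[:, 1], c=shape_labels, s=8, cmap="tab10")
plt.title("t-SNE of model features, coloured by shape category")
plt.show()
```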

Layerwise relevance propagation revealed that DVD-training resulted in a different recognition strategy than baseline controls: DVD-training puts emphasis on large parts of the objects, rather than highly localised or highly distributed features. 5/

08.07.2025 13:03 — 👍 5    🔁 0    💬 1    📌 0
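As a flavour of the method, a hand-rolled epsilon-LRP pass through a toy two-layer ReLU network; the preprint applies LRP to full deep networks with appropriate propagation rules, so treat this purely as an illustration.

```python
# Sketch: epsilon-rule layerwise relevance propagation for a tiny MLP.
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(64, 10), torch.zeros(64)  # layer 1 (10 -> 64)
W2, b2 = torch.randn(5, 64), torch.zeros(5)    # layer 2 (64 -> 5)
x = torch.rand(10)

a1 = torch.relu(W1 @ x + b1)                   # forward pass
z2 = W2 @ a1 + b2
target = z2.argmax()

eps = 1e-6
# Relevance at the output: the logit of the predicted class.
r2 = torch.zeros_like(z2); r2[target] = z2[target]
# Epsilon rule, layer 2 -> layer 1: distribute relevance in proportion to
# each unit's contribution z_ij = W_ij * a_j.
z = W2 * a1                                    # (5, 64) contributions
r1 = (z / (z.sum(1, keepdim=True) + eps) * r2.unsqueeze(1)).sum(0)
# Epsilon rule, layer 1 -> input.
z = W1 * x                                     # (64, 10) contributions
r0 = (z / (z.sum(1, keepdim=True) + eps) * r1.unsqueeze(1)).sum(0)
print(r0)                                      # per-input relevance
```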

Result 1: DVD training massively improves shape-reliance in ANNs.

We report a new state of the art, reaching human-level shape bias (even though the model uses orders of magnitude less data and parameters). This was true for all datasets and architectures tested. 4/

08.07.2025 13:03 — 👍 7    🔁 1    💬 1    📌 0
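For reference, the standard cue-conflict shape-bias measure (in the style of Geirhos et al.): on images whose shape and texture come from different classes, shape bias is the fraction of shape-consistent decisions among all shape- or texture-consistent ones. A minimal sketch, with hypothetical inputs:

```python
# Sketch: cue-conflict shape-bias computation.
import torch

@torch.no_grad()
def shape_bias(model, images, shape_labels, texture_labels):
    preds = model(images).argmax(dim=1)
    shape_hits = (preds == shape_labels)
    texture_hits = (preds == texture_labels)
    relevant = shape_hits | texture_hits  # ignore off-target predictions
    return shape_hits[relevant].float().mean().item()
```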

We then test the resulting DNNs across a range of conditions, each selected because it is challenging for AI: (i) shape-texture bias, (ii) recognising abstract shapes embedded in complex backgrounds, (iii) robustness to image perturbations, and (iv) adversarial robustness. 3/

08.07.2025 13:03 — 👍 3    🔁 0    💬 1    📌 0

The idea: instead of high-fidelity training from the get-go (the gold standard), we simulate visual development from newborns to 25 years of age by synthesising decades of developmental vision research into an AI preprocessing pipeline (Developmental Visual Diet, DVD). 2/

08.07.2025 13:03 — 👍 7    🔁 0    💬 1    📌 1
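A minimal sketch of what such an age-indexed preprocessing schedule could look like; the parameter curves below are invented placeholders, while the preprint derives the actual trajectories from developmental vision research.

```python
# Sketch: age-dependent preprocessing in the spirit of the DVD pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def dvd_transform(img, age_years, max_age=25.0):
    t = min(age_years / max_age, 1.0)
    sigma = 4.0 * (1.0 - t)         # acuity: blur fades with simulated age
    c = 0.1 + 0.9 * t               # contrast sensitivity ramps up with age
    out = gaussian_filter(img, sigma=sigma) if sigma > 0 else img
    return out.mean() + c * (out - out.mean())

# During training, each sample would be preprocessed at the simulated age
# matching the current point in the schedule, e.g. age = 25.0 * epoch / n_epochs.
```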


Do you have a link to the paper on ofc?

25.06.2025 18:40 — 👍 0    🔁 0    💬 1    📌 0
End-to-end topographic networks as models of cortical map formation and human visual behaviour (Nature Human Behaviour): Lu et al. introduce all-topographic neural networks as a parsimonious model of the human visual cortex.

Nice paper by @zejinlu.bsky.social from the group of @timkietzmann.bsky.social, appearing in Nat Human Behav www.nature.com/articles/s41... showing the properties of a CNN in which the weight-sharing constraint is lifted. #neuroAI

16.06.2025 06:19 — 👍 7    🔁 1    💬 1    📌 0
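The core architectural move, a convolution-like layer without weight sharing so that each spatial location learns its own filter, can be sketched as follows (an illustrative implementation, not the authors' code):

```python
# Sketch: a locally connected 2-D layer (convolution without weight sharing).
import torch
import torch.nn as nn

class LocallyConnected2d(nn.Module):
    def __init__(self, in_ch, out_ch, in_size, kernel, stride=1):
        super().__init__()
        out_size = (in_size - kernel) // stride + 1
        # One weight tensor per output location: no sharing across space.
        self.weight = nn.Parameter(torch.randn(
            out_size * out_size, out_ch, in_ch * kernel * kernel) * 0.01)
        self.kernel, self.stride, self.out_size = kernel, stride, out_size

    def forward(self, x):
        patches = nn.functional.unfold(x, self.kernel, stride=self.stride)
        patches = patches.transpose(1, 2)  # (B, locations, in_ch*k*k)
        out = torch.einsum('blf,lof->blo', patches, self.weight)
        return out.transpose(1, 2).reshape(
            x.size(0), -1, self.out_size, self.out_size)

layer = LocallyConnected2d(3, 8, in_size=32, kernel=5)
print(layer(torch.rand(2, 3, 32, 32)).shape)  # torch.Size([2, 8, 28, 28])
```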

Yes, great suggestion. Recurrent ANNs with top-down and lateral connectivity are the other line of research in the lab; we just haven't gotten around to putting the two together.

07.06.2025 12:02 — 👍 1    🔁 0    💬 0    📌 0

Not that I know off the top of my head, but it would be relatively straightforward to do.

07.06.2025 04:52 — 👍 0    🔁 0    💬 0    📌 0
End-to-end topographic networks as models of cortical map formation and human visual behaviour (Nature Human Behaviour): Lu et al. introduce all-topographic neural networks as a parsimonious model of the human visual cortex.

Working link: www.nature.com/articles/s41...

06.06.2025 19:05 — 👍 0    🔁 0    💬 0    📌 0

Introducing All-TNNs: Topographic deep neural networks that exhibit ventral-stream-like feature tuning and a better match to human behaviour than the gold standard. Now out in Nature Human Behaviour. 👇

06.06.2025 11:00 — 👍 40    🔁 8    💬 1    📌 0


Can seemingly complex multi-area computations in the brain emerge from the need for energy-efficient computation? In our new preprint on predictive remapping in active vision, we report on such a case.

Let us take you for a spin. 1/6 www.biorxiv.org/content/10.1...

05.06.2025 13:14 — 👍 37    🔁 14    💬 1    📌 2

In summary, optimising for energy efficiency led to signatures of predictive remapping, implemented in the model via a translation from relative to absolute eye-position codes and inhibitory prediction. No genetic hardwiring required. Work with @thonor.bsky.social & @psulewski.bsky.social. /fin

05.06.2025 13:14 — 👍 1    🔁 0    💬 0    📌 0

Second, we found that these computations rely on just 0.5% of units. These units had learned to transform relative saccade targets into a world-centered reference frame. Lesioning them collapsed predictive remapping entirely: instead, the model predicted that the current fixation would also be the next. 5/6

05.06.2025 13:14 — 👍 2    🔁 0    💬 1    📌 0
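A sketch of this kind of lesioning analysis: silence a fixed subset of hidden units at every timestep so the lesion feeds back into the recurrent dynamics, then re-run the model. The RNNCell loop and names are assumptions for illustration.

```python
# Sketch: lesion selected hidden units inside the recurrent loop.
import torch
import torch.nn as nn

@torch.no_grad()
def run_with_lesion(cell: nn.RNNCell, inputs, lesion_idx):
    """inputs: (T, B, input_dim); lesion_idx: indices of units to silence."""
    h = torch.zeros(inputs.size(1), cell.hidden_size)
    states = []
    for x_t in inputs:              # step the recurrence manually
        h = cell(x_t, h)
        h[:, lesion_idx] = 0.0      # lesion feeds back into the dynamics
        states.append(h.clone())
    return torch.stack(states)

# e.g. lesion the 0.5% of units identified as carrying the absolute
# eye-position code, then test whether predictive remapping survives:
# lesion_idx = top_units[: int(0.005 * hidden_size)]
```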

We make two important observations on emergent properties: First, the RNNs spontaneously learn to predict and inhibit upcoming fixation content. Energy minimisation alone drove sophisticated predictive computations supporting visual stability. 4/6

05.06.2025 13:14 — 👍 2    🔁 0    💬 1    📌 0

Is this capacity genetically hardwired, or can it emerge from simpler principles? To find out, we trained RNNs on human-like fixation sequences (image patches and efference copies) on natural scenes. Only constraint: minimise energy consumption (unit preactivation). 3/6

05.06.2025 13:14 — 👍 2    🔁 0    💬 1    📌 0
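A minimal sketch of that single constraint: train a recurrent network whose only loss is the energy of its unit preactivations. The squared-norm choice and the plain RNNCell are assumed details; see the preprint for the exact formulation.

```python
# Sketch: an RNN trained purely to minimise preactivation energy.
import torch
import torch.nn as nn

class EnergyRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.RNNCell(input_dim, hidden_dim)

    def forward(self, inputs):      # inputs: (T, B, input_dim)
        h = torch.zeros(inputs.size(1), self.cell.hidden_size)
        energy = 0.0
        for x_t in inputs:
            # Preactivation = input drive + recurrent drive, before tanh.
            pre = (x_t @ self.cell.weight_ih.T + self.cell.bias_ih
                   + h @ self.cell.weight_hh.T + self.cell.bias_hh)
            h = torch.tanh(pre)
            energy = energy + pre.pow(2).mean()  # energy of preactivations
        return energy / inputs.size(0)

# Training then simply minimises model(fixation_sequence) with an optimiser;
# predictive inhibition of upcoming fixation content emerges from this alone.
```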

Perceptual stability, despite our constant eye movements, requires complex predictive computations, often summarised as predictive remapping. This is a challenging task: it requires predicting visual features across spatial transformations derived from relative eye-movement coordinates. 2/6

05.06.2025 13:14 — 👍 2    🔁 0    💬 1    📌 0
