A long time coming, now out in @natmachintell.nature.com: Visual representations in the human brain are aligned with large language models.
Check it out (and come chat with us about it at CCN).
07.08.2025 14:16 — 👍 15 🔁 0 💬 0 📌 0
For completeness' sake: we know the other team and cite both of their papers in the preprint.
26.07.2025 18:46 — 👍 1 🔁 0 💬 1 📌 0
The devil is in the details, as usual.
They (and others) focused on acuity, while we show that the actual gains come not from acuity but from the development of contrast sensitivity.
26.07.2025 18:44 — 👍 1 🔁 0 💬 2 📌 0
To be honest, so far it has exceeded our expectations across the board.
A big surprise was that visual acuity (i.e. initial blurring) had so little impact. This is what others had focused on in the past. Instead, the development of contrast sensitivity gets you most of the way there.
08.07.2025 21:00 — 👍 8 🔁 0 💬 1 📌 0
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.
Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy
arxiv.org/abs/2507.03168
08.07.2025 13:03 — 👍 125 🔁 56 💬 3 📌 10
Thank you!
08.07.2025 16:20 — 👍 1 🔁 0 💬 0 📌 0
We are incredibly excited about this because DVD may offer a resource-efficient path towards safer, more human-like AI vision — and suggests that biology, neuroscience, and psychology have much to offer in guiding the next generation of artificial intelligence. #NeuroAI #AI /fin
08.07.2025 13:03 — 👍 9 🔁 1 💬 2 📌 0
In summary, DVD-training yields models that rely on a fundamentally different feature set, shifting from distributed local textures to integrative, shape-based features as the foundation for their decisions. 9/
08.07.2025 13:03 — 👍 7 🔁 1 💬 1 📌 0
Result 4: How about adversarial robustness? DVD-trained models also showed greater resilience to all black- and white-box attacks tested, performing 3–5 times better than baselines under high-strength perturbations. 8/
08.07.2025 13:03 — 👍 3 🔁 0 💬 1 📌 0
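A minimal sketch of the kind of white-box test involved, using one standard attack (FGSM) in PyTorch; the actual attack suite, model, and epsilon values used in the paper are not given here, so these choices are purely illustrative:

```python
# Minimal FGSM white-box attack sketch (PyTorch). Epsilon and the model
# are placeholders, not the paper's exact attack settings.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """One-step sign-gradient perturbation within an L-inf ball."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()

def robust_accuracy(model, loader, epsilon):
    """Accuracy on adversarially perturbed inputs."""
    correct = total = 0
    for x, y in loader:
        adv = fgsm_attack(model, x, y, epsilon)
        correct += (model(adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```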
Result 3: DVD-trained models exhibit more human-like robustness to Gaussian blur than baselines, plus improved overall robustness to all image perturbations tested. 7/
08.07.2025 13:03 — 👍 3 🔁 0 💬 1 📌 0
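As a rough illustration of such a perturbation sweep, accuracy under increasing Gaussian blur could be measured like this (sigmas and kernel sizes are placeholder choices, not the paper's protocol):

```python
# Sketch of a blur-robustness sweep: accuracy as a function of Gaussian
# blur strength. Sigmas and kernel sizes are illustrative.
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def accuracy_under_blur(model, loader, sigma):
    correct = total = 0
    for x, y in loader:
        if sigma > 0:
            # odd kernel covering ~3 standard deviations
            x = TF.gaussian_blur(x, kernel_size=2 * int(3 * sigma) + 1, sigma=sigma)
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

# curve = {s: accuracy_under_blur(model, val_loader, s) for s in (0, 1, 2, 4, 8)}
```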
Result 2: DVD-training enabled abstract shape recognition in cases where frontier AI models, despite being explicitly prompted, fail spectacularly.
t-SNE nicely visualises the fundamentally different approach of DVD-trained models. 6/
08.07.2025 13:03 — 👍 4 🔁 0 💬 1 📌 0
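A minimal sketch of this kind of t-SNE feature visualisation, assuming penultimate-layer activations (`features`) have already been extracted; this is generic scikit-learn usage, not the paper's code:

```python
# Sketch: embed model features in 2D with t-SNE and colour by class.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_feature_tsne(features, labels, perplexity=30):
    """features: (n_samples, n_dims) activations; labels: (n_samples,) ints."""
    emb = TSNE(n_components=2, perplexity=perplexity, init="pca").fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab10")
    plt.title("t-SNE of model features")
    plt.show()
```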
Layerwise relevance propagation revealed that DVD-training resulted in a different recognition strategy than baseline controls: DVD-training puts emphasis on large parts of the objects, rather than highly localised or highly distributed features. 5/
08.07.2025 13:03 — 👍 5 🔁 0 💬 1 📌 0
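For the curious, relevance maps of this kind can be produced with, for example, Captum's LRP implementation; the propagation rules and model below are assumptions, not the paper's setup:

```python
# Sketch of layerwise relevance propagation via Captum's LRP; the paper's
# exact LRP rules and architecture may differ.
import torch
from captum.attr import LRP

def relevance_map(model, image, target_class):
    """Return a per-pixel relevance heatmap for one image of shape (1, C, H, W)."""
    model.eval()
    attribution = LRP(model).attribute(image, target=target_class)
    return attribution.sum(dim=1).squeeze(0)  # collapse channels to an (H, W) heatmap
```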
Result 1: DVD training massively improves shape reliance in ANNs.
We report a new state of the art, reaching human-level shape bias (even though the model uses orders of magnitude less data and fewer parameters). This was true for all datasets and architectures tested. 4/
08.07.2025 13:03 — 👍 7 🔁 1 💬 1 📌 0
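Shape bias is conventionally measured on cue-conflict images (in the style of Geirhos et al.): among trials the model classifies as either the shape class or the texture class, the fraction decided by shape. A minimal sketch, with the cue-conflict loader as an assumed interface:

```python
# Sketch of the standard cue-conflict shape-bias metric. The loader
# yielding (image, shape_label, texture_label) triples is an assumption.
import torch

@torch.no_grad()
def shape_bias(model, cue_conflict_loader):
    shape_hits = texture_hits = 0
    for x, shape_y, texture_y in cue_conflict_loader:
        pred = model(x).argmax(1)
        shape_hits += (pred == shape_y).sum().item()
        texture_hits += (pred == texture_y).sum().item()
    # 1.0 = always decides by shape; 0.5 = no preference
    return shape_hits / max(shape_hits + texture_hits, 1)
```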
We then test the resulting DNNs across a range of conditions, each selected because it is challenging for AI: (i) shape-texture bias, (ii) recognising abstract shapes embedded in complex backgrounds, (iii) robustness to image perturbations, and (iv) adversarial robustness. 3/
08.07.2025 13:03 — 👍 3 🔁 0 💬 1 📌 0
The idea: instead of high-fidelity training from the get-go (the gold standard), we simulate visual development from birth to 25 years of age by synthesising decades of developmental vision research into an AI preprocessing pipeline (Developmental Visual Diet - DVD). 2/
08.07.2025 13:03 — 👍 7 🔁 0 💬 1 📌 1
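A toy sketch of what such an age-dependent preprocessing transform could look like: Gaussian blur standing in for immature acuity, plus a frequency-domain reweighting standing in for developing contrast sensitivity. All schedules and constants here are illustrative, not the paper's:

```python
# Toy sketch of an age-dependent "developmental visual diet" transform.
# Schedules and constants are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def acuity_sigma(age_years, sigma_newborn=4.0):
    """Blur std (pixels) shrinking as simulated age increases (toy schedule)."""
    return sigma_newborn / (1.0 + age_years)

def csf_weight(freq, age_years):
    """Toy contrast-sensitivity curve: a band-pass whose peak shifts with age."""
    peak = 2.0 + 4.0 * min(age_years, 25) / 25  # cycles/image, toy values
    return np.exp(-((np.log(freq + 1e-6) - np.log(peak)) ** 2))

def dvd_transform(img, age_years):
    """Apply acuity blur, then reweight spatial frequencies by the toy CSF."""
    img = gaussian_filter(img, sigma=acuity_sigma(age_years))
    f = np.fft.fft2(img)
    fy, fx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                         np.fft.fftfreq(img.shape[1]), indexing="ij")
    radius = np.hypot(fx, fy) * img.shape[0]  # radial frequency, cycles/image
    weight = csf_weight(radius, age_years)
    weight[radius == 0] = 1.0                 # keep mean luminance untouched
    return np.fft.ifft2(f * weight).real

# During training, simulated age would grow with training progress, e.g.:
# age = 25 * epoch / num_epochs; batch = dvd_transform(batch, age)
```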
Do you have a link to the paper on ofc?
25.06.2025 18:40 — 👍 0 🔁 0 💬 1 📌 0
Yes, great suggestion. Recurrent ANNs with top-down and lateral connectivity are the other line of research in the lab, we just haven't gotten around to putting the two together.
07.06.2025 12:02 — 👍 1 🔁 0 💬 0 📌 0
Not that I know off the top of my head, but it would be relatively straightforward to do.
07.06.2025 04:52 — 👍 0 🔁 0 💬 0 📌 0
Introducing All-TNNs: Topographic deep neural networks that exhibit ventral-stream-like feature tuning and a better match to human behaviour than the gold standard. Now out in Nature Human Behaviour. 👇
06.06.2025 11:00 — 👍 40 🔁 8 💬 1 📌 0
Can seemingly complex multi-area computations in the brain emerge from the need for energy efficient computation? In our new preprint on predictive remapping in active vision, we report on such a case.
Let us take you for a spin. 1/6 www.biorxiv.org/content/10.1...
05.06.2025 13:14 — 👍 37 🔁 14 💬 1 📌 2
In summary, optimising for energy efficiency led to signatures of predictive remapping, implemented in the model via a translation from relative to absolute eye-position codes and inhibitory prediction. No genetic hardwiring required. Work with @thonor.bsky.social & @psulewski.bsky.social. /fin
05.06.2025 13:14 — 👍 1 🔁 0 💬 0 📌 0
Second, we found that these computations rely on just 0.5% of units. These units had learned to transform relative saccade targets into a world-centred reference frame. Lesioning them collapsed predictive remapping entirely: the model simply predicted the current fixation to also be the next. 5/6
05.06.2025 13:14 — 👍 2 🔁 0 💬 1 📌 0
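A minimal sketch of how such a lesion analysis can be run in PyTorch, zeroing a chosen set of units with a forward hook; the layer name and unit-selection criterion are illustrative, not the paper's:

```python
# Sketch of a unit-lesioning analysis: zero out selected hidden units on
# every forward pass, then re-run the remapping evaluation.
import torch

def lesion_units(layer, unit_idx):
    """Zero the given units of `layer`'s output; returns the hook handle."""
    def hook(module, inputs, output):
        output = output.clone()
        output[..., unit_idx] = 0.0
        return output
    return layer.register_forward_hook(hook)

# handle = lesion_units(model.readout, selected_units)  # ~0.5% of units
# ...evaluate predictive remapping with the lesion in place...
# handle.remove()
```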
We make two important observations on emergent properties: First, the RNNs spontaneously learn to predict and inhibit upcoming fixation content. Energy minimisation alone drove sophisticated predictive computations supporting visual stability. 4/6
05.06.2025 13:14 — 👍 2 🔁 0 💬 1 📌 0
Is this capacity genetically hardwired, or can it emerge from simpler principles? To find out, we trained RNNs on human-like fixation sequences (image patches and efference copies) on natural scenes. Only constraint: minimise energy consumption (unit preactivation). 3/6
05.06.2025 13:14 — 👍 2 🔁 0 💬 1 📌 0
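A minimal sketch of that objective: with the fixation inputs clamped, the training loss is just the mean squared preactivation ("energy") of the units, so predicting and inhibiting upcoming input becomes the cheapest strategy. The RNN interface below is an assumption:

```python
# Sketch of the energy objective: minimise mean squared unit preactivation
# across units and time steps. The rnn(...) interface is hypothetical.
import torch

def energy_loss(preactivations):
    """preactivations: list of (batch, units) tensors, one per time step."""
    return torch.stack([p.pow(2).mean() for p in preactivations]).mean()

# for patches, efference in fixation_sequences:
#     preacts = rnn(patches, efference)  # per-step preactivations
#     energy_loss(preacts).backward()
```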
Perceptual stability, despite our constant eye movements, requires complex predictive computations, often summarised as predictive remapping. This is a challenging task: it requires predicting visual features across spatial transformations derived from relative eye-movement coordinates. 2/6
05.06.2025 13:14 — 👍 2 🔁 0 💬 1 📌 0