
Mick Bonner

@mickbonner.bsky.social

Assistant Professor of Cognitive Science at Johns Hopkins. My lab studies human vision using cognitive neuroscience and machine learning. bonnerlab.org

277 Followers  |  139 Following  |  40 Posts  |  Joined: 11.12.2025

Latest posts by mickbonner.bsky.social on Bluesky

I don't disagree with that point, but at the same time, you can think of this from another perspective: Isn't it crazy that despite the many complex nonlinear transformations implemented by seemingly different models, they nonetheless arrive at something that is similar up to a linear transform?

10.02.2026 17:32 | 👍 4  🔁 0  💬 1  📌 0
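To make "similar up to a linear transform" concrete, here is a minimal sketch (my construction, with synthetic stand-ins for real model activations, assuming numpy and scikit-learn): fit a cross-validated linear map from one feature space to the other and check how much variance it explains. R^2 near 1 means the two representations are linearly equivalent for practical purposes.

```python
# Toy demonstration: two "models" built from the same latent structure but
# with different linear readouts and noise. A ridge-regression map from one
# feature space to the other recovers the correspondence.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli = 500
latent = rng.standard_normal((n_stimuli, 20))  # shared latent structure
feats_a = latent @ rng.standard_normal((20, 100)) + 0.1 * rng.standard_normal((n_stimuli, 100))
feats_b = latent @ rng.standard_normal((20, 80)) + 0.1 * rng.standard_normal((n_stimuli, 80))

# Fit feats_a -> feats_b with cross-validation; mean R^2 near 1 indicates
# the representations match up to a linear transform.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, feats_a, feats_b, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```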

More to come. We are working on a paper now that characterizes these issues in more depth.

10.02.2026 17:25 | 👍 1  🔁 0  💬 1  📌 0
Universal scale-free representations in human visual cortex. Author summary: The human cerebral cortex is thought to encode sensory information in population activity patterns, but the statistical structure of these population codes has yet to be characterized. ...

And Fig. 7 in this paper. journals.plos.org/ploscompbiol...

10.02.2026 17:25 | 👍 1  🔁 0  💬 1  📌 0
Universal dimensions of visual representation. Probing neural representations reveals universal aspects of vision in artificial and biological networks.

This number is based on what we have seen in analyses in my lab. Some examples are Fig. 5 of this paper...
www.science.org/doi/10.1126/...

10.02.2026 17:25 | 👍 1  🔁 0  💬 1  📌 0

Second, if the only thing that differentiates two alternative models is a simple linear reweighting, it raises the question of how important their differences really are. It may be more informative, in the end, to focus on understanding what the models have in common rather than how they differ.

10.02.2026 16:55 | 👍 3  🔁 0  💬 1  📌 0

3. We have been thinking about this. The answer is not straightforward. First, RSA is effectively insensitive to anything beyond the first 5-10 PCs in brain and network representations, and I happen to think there is much more to the story than just a handful of dimensions. bsky.app/profile/mick...

10.02.2026 16:55 | 👍 5  🔁 0  💬 2  📌 0
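A toy illustration of the RSA point above (my construction, not an analysis from the paper; assumes numpy, scipy, and scikit-learn): when per-dimension variances fall off quickly, the RDM computed from just the top few PCs is already nearly identical to the RDM of the full representation, so RSA has little sensitivity to everything beyond them.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_stimuli, n_units = 200, 1000
# Per-unit variances fall off as a power law, loosely mimicking the
# fast-decaying eigenspectra of brain and network representations.
spectrum = np.arange(1, n_units + 1, dtype=float) ** -1.5
feats = rng.standard_normal((n_stimuli, n_units)) * np.sqrt(spectrum)

full_rdm = pdist(feats, metric="correlation")  # RDM of the full features
for k in (5, 10, 100):
    pca = PCA(n_components=k).fit(feats)
    recon = pca.inverse_transform(pca.transform(feats))  # keep top-k PCs only
    rho = spearmanr(full_rdm, pdist(recon, metric="correlation"))[0]
    print(f"top-{k} PCs vs. full RDM: Spearman rho = {rho:.3f}")
```

The RDM correlation is already high at k = 5 and saturates quickly, which is the sense in which RSA is dominated by a handful of leading dimensions.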

1. Yes, trained networks are much better when using RSA. We show this in a supplementary analysis.
2. We have never computed this exact quantity. But we did show that if you do PCA on wide untrained networks, you can drastically reduce their dimensionality while still retaining their performance (see the sketch below).

10.02.2026 16:55 | 👍 2  🔁 0  💬 1  📌 0
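A rough sketch of the PCA point in item 2, with toy data and numbers of my own choosing rather than the paper's (assumes numpy and scikit-learn): project wide features onto their leading components and compare held-out regression performance against the full feature set.

```python
# Simulated check: a 5000-dim "wide untrained" feature space vs. a 50-dim
# PCA projection of it, as predictors of a simulated voxel response.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_stimuli, n_units = 300, 5000

# Power-law variances; the simulated voxel depends mostly on leading dims.
spectrum = np.arange(1, n_units + 1, dtype=float) ** -1.5
feats = rng.standard_normal((n_stimuli, n_units)) * np.sqrt(spectrum)
voxel = (feats @ (rng.standard_normal(n_units) * np.sqrt(spectrum))
         + 0.5 * rng.standard_normal(n_stimuli))

ridge = RidgeCV(alphas=np.logspace(-3, 5, 17))
full = cross_val_score(ridge, feats, voxel, cv=5, scoring="r2").mean()
# PCA is fit inside each training fold to avoid leakage.
low = cross_val_score(make_pipeline(PCA(n_components=50), ridge),
                      feats, voxel, cv=5, scoring="r2").mean()
print(f"full {n_units}-dim R^2 = {full:.2f}; 50-dim PCA R^2 = {low:.2f}")
```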

Although pre-trained networks can be super useful for comp neuro, the surprising success of untrained networks suggests that there may still be much to learn by focusing on simpler approaches. We shouldn't be focusing all our attention on the latest DNN models coming out of the ML world.

10.02.2026 14:56 | 👍 2  🔁 0  💬 0  📌 0

These architectural manipulations were things that you wouldn’t typically think to try if your primary focus was on trained networks. We wrote about this in our discussion.

10.02.2026 14:56 | 👍 1  🔁 0  💬 1  📌 0

Importantly, one of the things we learned in that work was that the field hasn’t been giving untrained networks the best chance possible. We found that fairly simple architectural manipulations could dramatically improve their performance.

10.02.2026 14:56 | 👍 2  🔁 0  💬 1  📌 0
Convolutional architectures are cortex-aligned de novo (Nature Machine Intelligence). Kazemian et al. report that untrained convolutional networks with wide layers predict primate visual cortex responses nearly as well as task-optimized networks, revealing how architectural constraints...

That's true. But untrained networks can do surprisingly well. In a recent paper, we found that untrained networks can rival trained networks in a key monkey dataset. In the human data we examined, there was still a gap relative to pre-trained models, as you point out. www.nature.com/articles/s42...

10.02.2026 14:56 | 👍 3  🔁 0  💬 2  📌 0
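For anyone who wants to try this kind of analysis, here is a minimal sketch of the general pipeline, not the paper's exact method (assumes torch, torchvision, and scikit-learn; `images` and `responses` are hypothetical placeholders for a real stimulus set and recordings).

```python
import numpy as np
import torch
from torchvision.models import alexnet
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

torch.manual_seed(0)
net = alexnet(weights=None).eval()  # untrained: random initialization

images = torch.rand(100, 3, 224, 224)  # placeholder stimuli
responses = np.random.default_rng(0).standard_normal(100)  # placeholder voxel

with torch.no_grad():
    feats = net.features(images).flatten(1).numpy()  # conv-stage activations

# Cross-validated ridge regression from features to responses. With these
# random placeholders R^2 will hover near or below zero; the point is the
# plumbing, which works unchanged with real stimuli and recordings.
ridge = RidgeCV(alphas=np.logspace(-2, 6, 17))
score = cross_val_score(ridge, feats, responses, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {score:.2f}")
```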
Deep learning in fetal, infant, and toddler neuroimaging research. Artificial intelligence (AI) is increasingly being integrated into everyday tasks and work environments. However, its adoption in medical image analys…

This paper was an awesome collaborative effort of a @fitngin.bsky.social working group. It provides a detailed review of how DNNs can be used to support dev neuro research

@lauriebayet.bsky.social and I wrote the network modeling section about how DNNs can be used to test developmental theories 🧡

28.01.2026 15:08 | 👍 27  🔁 14  💬 2  📌 1

Infants organise their visual world into categories at two months old! So happy to see these results published - congratulations Cliona and the rest of the FOUNDCOG team.

02.02.2026 16:39 | 👍 30  🔁 3  💬 1  📌 0

New paper from our lab on the behavioral significance of high-dimensional neural representations!

30.01.2026 18:57 | 👍 16  🔁 2  💬 0  📌 1
Vacancy: PhD Position in NeuroAI for Video Perception in the Human Brain. Are you interested in using AI to unravel the mysteries of the brain? Do you want to perform cutting-edge NeuroAI research and leverage deep learning to understand human vision? Then check out the vacancy below and apply for a PhD position in this exciting research direction.

I have a PhD opening for my #VIDI BrainShorts project 📽️🧠🤖! Are you or do you know an ambitious, recent (or almost) MSc graduate with a background in NeuroAI and an interest in large-scale data collection and video perception? Check out our vacancy! (deadline Feb 15).
werkenbij.uva.nl/en/vacancies...

16.01.2026 12:31 | 👍 30  🔁 26  💬 1  📌 0

Wonderful article about our recent paper in @pnasnexus.org! Thanks, @sachapfeiffer.bsky.social and @mickbonner.bsky.social!

@yikai-tang.bsky.social @uoftpsychology.bsky.social @artsci.utoronto.ca @utoronto.ca

09.01.2026 21:18 | 👍 13  🔁 3  💬 1  📌 0

Our new paper in @sfnjournals.bsky.social shows different neural systems for integrating views into places: PPA integrates views *of* a location (e.g., views of a landmark), while RSC integrates views *from* a location (e.g., views of a panorama). Work by the bluesky-less Linfeng Tony Han.

07.01.2026 17:11 | 👍 36  🔁 16  💬 2  📌 0
Why isn’t modern AI built around principles from cognitive science? First post in a series on cognitive science and AI

Why isn't modern AI built around principles from cognitive science or neuroscience? Starting a Substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question, as part of a first series of posts on the relation between these fields. 1/3

16.12.2025 15:40 | 👍 117  🔁 34  💬 4  📌 5
Lindsay Lab - Postdoc Position. Artificial neural networks applied to psychology, neuroscience, and climate change.

Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro

08.12.2025 23:53 | 👍 125  🔁 91  💬 2  📌 0

As for what other inductive biases will prove to be important, this is still TBD. I think that wiring costs (e.g., topography) may be one.

15.12.2025 19:57 | 👍 3  🔁 0  💬 1  📌 0

But neuroscientists and AI engineers have different goals! Neuroscientists should be seeking parsimonious theories, not high-performing models.

15.12.2025 19:57 | 👍 3  🔁 0  💬 1  📌 0

Importantly, to get this to work, NeuroAI researchers have to go back to the drawing board and search for simpler approaches. I think that currently, we are relying too much on the tools and models coming out of AI. It makes it seem like the only feasible approach is whatever currently works in AI.

15.12.2025 19:57 | 👍 3  🔁 0  💬 1  📌 0

The simple-local-learning goal is certainly non-trivial! But recent findings (especially universality of network representations) suggest that it has potential.

15.12.2025 19:57 | 👍 1  🔁 0  💬 1  📌 0

What might such a theory look like? My bet is that it will be one that combines strong architectural inductive biases with fully unsupervised learning algorithms that operate without the need for backpropagation. This is a very different direction from where AI and NeuroAI are currently headed.

15.12.2025 18:44 | 👍 3  🔁 0  💬 1  📌 0
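As one concrete illustration of "unsupervised learning without backpropagation" (my example; the post does not commit to any specific rule): Oja's rule, a classic local Hebbian update under which a single linear unit converges to the first principal component of its inputs using only pre- and post-synaptic activity.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
v = rng.standard_normal(dim)
v /= np.linalg.norm(v)  # dominant direction of input variance

w = 0.1 * rng.standard_normal(dim)
lr = 0.01
for _ in range(5000):
    x = 3.0 * rng.standard_normal() * v + 0.3 * rng.standard_normal(dim)
    y = w @ x                   # post-synaptic activity
    w += lr * y * (x - y * w)   # Oja's rule: Hebbian term + local decay

# Cosine similarity between learned weights and the true top PC direction.
print(f"|cos(w, v)| = {abs(w @ v) / np.linalg.norm(w):.3f}")
```

The update uses only quantities local to the unit (its input and its own output), which is the property that makes rules of this family biologically appealing.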

Although the deep learning revolution in vision science started with task-based optimization, there are intriguing signs that a far more parsimonious computational theory of the visual hierarchy is attainable.

15.12.2025 18:44 | 👍 2  🔁 0  💬 1  📌 0

These universal representations are not restricted to early network layers. We see them across the full depth of the networks that we examined. Their strong universality and independence from task demands call out for a parsimonious explanation that has yet to be discovered.

15.12.2025 18:44 | 👍 1  🔁 0  💬 1  📌 0
Universal dimensions of visual representation. Probing neural representations reveals universal aspects of vision in artificial and biological networks.

A second paper from my lab adds another element to this story: after training, many diverse DNNs converge to universal features that are independent of the tasks they were trained on. It is these universal features that are most strongly shared with visual cortex. www.science.org/doi/10.1126/...

15.12.2025 18:44 | 👍 3  🔁 1  💬 1  📌 0
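One common way to quantify this kind of shared structure between networks (my choice of method; the paper's own analyses may differ) is linear centered kernel alignment (CKA; Kornblith et al., 2019), which compares two feature spaces in a way that is invariant to rotations and isotropic scaling. A sketch on synthetic features:

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between (stimuli x units) feature matrices."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    num = np.linalg.norm(x.T @ y) ** 2  # squared Frobenius norm
    return num / (np.linalg.norm(x.T @ x) * np.linalg.norm(y.T @ y))

rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 10))  # shared "universal" structure
a = latent @ rng.standard_normal((10, 300)) + rng.standard_normal((200, 300))
b = latent @ rng.standard_normal((10, 500)) + rng.standard_normal((200, 500))
noise = rng.standard_normal((200, 300))

print(f"CKA(a, b)     = {linear_cka(a, b):.2f}")      # shared latent: higher
print(f"CKA(a, noise) = {linear_cka(a, noise):.2f}")  # unrelated: near zero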
Structure as an inductive bias for brain–model alignment (Nature Machine Intelligence). Even before training, convolutional neural networks may reflect the brain's visual processing principles. A study now shows how structure alone can help to explain the alignment between brains and mod...

What does this mean? It suggests that architectural inductive biases alone can get us surprisingly far in explaining the image representations of the ventral stream. See a great commentary by @binxuwang.bsky.social and Carlos Ponce. www.nature.com/articles/s42...

15.12.2025 18:44 | 👍 14  🔁 4  💬 1  📌 0

Second, similar manipulations in other architectures were relatively ineffective; the effects were specific to convolutional architectures and relied critically on the use of spatially local filters.

15.12.2025 18:44 | 👍 3  🔁 0  💬 1  📌 0

These results could not simply be explained by high-dimensional regression. First, we could drastically reduce the dimensionality of wide layers through PCA while still retaining strong performance.

15.12.2025 18:44 | 👍 2  🔁 0  💬 1  📌 0
