I don't disagree with that point, but at the same time, you can think of this from another perspective: Isn't it crazy that despite the many complex nonlinear transformations implemented by seemingly different models, they nonetheless arrive at something that is similar up to a linear transform?
10.02.2026 17:32
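A minimal sketch of what "similar up to a linear transform" can mean in practice: fit a cross-validated linear map from one model's features to another's and score how much variance it explains. The feature matrices below are synthetic stand-ins, not the networks discussed in this thread.

```python
# Toy sketch: measure how well one model's features are explained by a
# linear transform of another's. Everything here is synthetic stand-in data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_stimuli = 500
feats_a = rng.standard_normal((n_stimuli, 256))                 # "model A" activations
mixing = rng.standard_normal((256, 128)) / np.sqrt(256)
feats_b = feats_a @ mixing + 0.1 * rng.standard_normal((n_stimuli, 128))  # "model B"

# Cross-validated ridge map from A to B, scored per unit of B.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 7))
pred_b = cross_val_predict(ridge, feats_a, feats_b, cv=5)
ss_res = ((feats_b - pred_b) ** 2).sum(axis=0)
ss_tot = ((feats_b - feats_b.mean(axis=0)) ** 2).sum(axis=0)
print(f"median linear-alignment R^2: {np.median(1 - ss_res / ss_tot):.2f}")
```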
More to come. We are working on a paper now that characterizes these issues in more depth.
10.02.2026 17:25
Second, if the only thing that differentiates two alternative models is a simple linear reweighting, it raises the question of how important their differences really are. It may be more informative in the end to focus on understanding what the models have in common than on how they differ.
10.02.2026 16:55
3. We have been thinking about this. The answer is not straightforward. First, RSA is effectively insensitive to anything beyond the first 5-10 PCs in brain and network representations, and I happen to think there is much more to the story than just a handful of dimensions. bsky.app/profile/mick...
10.02.2026 16:55
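To make the RSA point concrete, here is a toy illustration (not any paper's analysis): when the eigenspectrum of a representation decays, an RDM built from only the first few PCs already correlates almost perfectly with the RDM from the full feature set, so RSA barely registers the remaining dimensions.

```python
# Toy illustration: with a decaying eigenspectrum, RDMs built from only the
# top few PCs are nearly identical to the RDM from the full representation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_units = 200, 1000
variances = np.arange(1, n_units + 1) ** -1.5             # power-law spectrum
feats = rng.standard_normal((n_stimuli, n_units)) * np.sqrt(variances)

feats_c = feats - feats.mean(axis=0)
full_rdm = pdist(feats_c, metric="correlation")
_, _, vt = np.linalg.svd(feats_c, full_matrices=False)    # PC directions

for k in (5, 10, 50):
    rdm_k = pdist(feats_c @ vt[:k].T, metric="correlation")
    rho, _ = spearmanr(full_rdm, rdm_k)
    print(f"RDM correlation using top {k} PCs: {rho:.3f}")
```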
1. Yes, trained networks are much better when using RSA. We show this in a supplementary analysis.
2. We have never computed this exact quantity. But we did show that if you do PCA on wide untrained networks, you can drastically reduce their dimensionality while still retaining their performance.
10.02.2026 16:55
Although pre-trained networks can be super useful for comp neuro, the surprising success of untrained networks suggests that there may still be much to learn by focusing on simpler approaches. We shouldn't be focusing all our attention on the latest DNN models coming out of the ML world.
10.02.2026 14:56
These architectural manipulations were things that you wouldn't typically think to try if your primary focus was on trained networks. We wrote about this in our discussion.
10.02.2026 14:56
Importantly, one of the things we learned in that work was that the field hasn't been giving untrained networks the best chance possible. We found that fairly simple architectural manipulations could dramatically improve their performance.
10.02.2026 14:56
Convolutional architectures are cortex-aligned de novo - Nature Machine Intelligence
Kazemian et al. report that untrained convolutional networks with wide layers predict primate visual cortex responses nearly as well as task-optimized networks, revealing how architectural constraints...
That's true. But untrained networks can do surprisingly well. In a recent paper, we found that untrained networks can rival trained networks in a key monkey dataset. In the human data we examined, there was still a gap relative to pre-trained models, as you point out. www.nature.com/articles/s42...
10.02.2026 14:56
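For anyone who wants to try this kind of comparison, here is a rough sketch of the general recipe, not the exact pipeline from the paper: fit the same ridge encoding model on features from an untrained and a pretrained network and compare cross-validated predictivity. The stimulus tensor, neural responses, and layer index below are placeholders.

```python
# Rough sketch of the comparison described above: fit the same ridge encoding
# model on features from an untrained and a pretrained network and compare
# cross-validated predictivity. `stimuli` (n_images x 3 x 224 x 224 tensor)
# and `responses` (n_images x n_neurons array) are placeholders for your data.
import numpy as np
import torch
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def layer_features(model, stimuli, layer_index=8):
    """Activations from one conv layer (hypothetical choice), flattened per image."""
    feats = []
    with torch.no_grad():
        for batch in stimuli.split(32):
            feats.append(model.features[: layer_index + 1](batch).flatten(1))
    return torch.cat(feats).numpy()

def encoding_score(feats, responses):
    """Median cross-validated R^2 across recorded units for a ridge model."""
    pred = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 4, 7)), feats, responses, cv=5)
    ss_res = ((responses - pred) ** 2).sum(axis=0)
    ss_tot = ((responses - responses.mean(axis=0)) ** 2).sum(axis=0)
    return np.median(1 - ss_res / ss_tot)

untrained = alexnet(weights=None).eval()
trained = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()
# stimuli, responses = ...  # load your own stimulus images and neural responses
# print("untrained:", encoding_score(layer_features(untrained, stimuli), responses))
# print("trained:  ", encoding_score(layer_features(trained, stimuli), responses))
```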
Deep learning in fetal, infant, and toddler neuroimaging research
Artificial intelligence (AI) is increasingly being integrated into everyday tasks and work environments. However, its adoption in medical image analys…
This paper was an awesome collaborative effort of a @fitngin.bsky.social working group. It provides a detailed review of how DNNs can be used to support dev neuro research
@lauriebayet.bsky.social and I wrote the network modeling section about how DNNs can be used to test developmental theories 🧵
28.01.2026 15:08
Infants organise their visual world into categories at two months old! So happy to see these results published - congratulations Cliona and the rest of the FOUNDCOG team.
02.02.2026 16:39
New paper from our lab on the behavioral significance of high-dimensional neural representations!
30.01.2026 18:57
Wonderful article about our recent paper in @pnasnexus.org! Thanks, @sachapfeiffer.bsky.social and @mickbonner.bsky.social!
@yikai-tang.bsky.social @uoftpsychology.bsky.social @artsci.utoronto.ca @utoronto.ca
09.01.2026 21:18
Our new paper in @sfnjournals.bsky.social shows different neural systems for integrating views into places: PPA integrates views *of* a location (e.g., views of a landmark), while RSC integrates views *from* a location (e.g., views of a panorama). Work by the bluesky-less Linfeng Tony Han.
07.01.2026 17:11
Why isn't modern AI built around principles from cognitive science?
First post in a series on cognitive science and AI
Why isn't modern AI built around principles from cognitive science or neuroscience? Starting a Substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question, as part of a first series of posts giving my current thoughts on the relation between these fields. 1/3
16.12.2025 15:40
Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change
Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro
08.12.2025 23:53
As for what other inductive biases will prove to be important, this is still TBD. I think that wiring costs (e.g., topography) may be one.
15.12.2025 19:57
But neuroscientists and AI engineers have different goals! Neuroscientists should be seeking parsimonious theories, not high-performing models.
15.12.2025 19:57
Importantly, to get this to work, NeuroAI researchers have to go back to the drawing board and search for simpler approaches. I think that currently, we are relying too much on the tools and models coming out of AI. It makes it seem like the only feasible approach is whatever currently works in AI.
15.12.2025 19:57
The simple-local-learning goal is certainly non-trivial! But recent findings (especially universality of network representations) suggest that it has potential.
15.12.2025 19:57
What might such a theory look like? My bet is that it will be one that combines strong architectural inductive biases with fully unsupervised learning algorithms that operate without the need for backpropagation. This is a very different direction than where AI and NeuroAI are currently headed.
15.12.2025 18:44
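Purely to illustrate what "unsupervised learning without backpropagation" can mean, and not as a proposal from this thread: Oja's rule is a classic local update that drives a single linear unit toward the first principal component of its inputs, using only quantities available at that unit.

```python
# Illustration only: Oja's rule, a local Hebbian update that needs no
# backpropagation and converges to the first principal component of its inputs.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
x = rng.multivariate_normal(np.zeros(2), cov, size=5000)  # toy input stream

w = rng.standard_normal(2)   # the unit's weights
lr = 0.01
for xi in x:
    y = w @ xi                       # the unit's response
    w += lr * y * (xi - y * w)       # Oja's rule: Hebbian term plus decay

top_pc = np.linalg.eigh(cov)[1][:, -1]   # true leading eigenvector
print(f"alignment with first PC: {abs(w @ top_pc) / np.linalg.norm(w):.3f}")
```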
Although the deep learning revolution in vision science started with task-based optimization, there are intriguing signs that a far more parsimonious computational theory of the visual hierarchy is attainable.
15.12.2025 18:44
These universal representations are not restricted to early network layers. We see them across the full depth of the networks that we examined. Their strong universality and independence of task demands call out for a parsimonious explanation that has yet to be discovered.
15.12.2025 18:44
Universal dimensions of visual representation
Probing neural representations reveals universal aspects of vision in artificial and biological networks.
A second paper from my lab adds another element to this story: after training, many diverse DNNs converge to universal features that are independent of the tasks they were trained on. It is these universal features that are most strongly shared with visual cortex. www.science.org/doi/10.1126/...
15.12.2025 18:44
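One way to make "universal dimensions" concrete, sketched here with toy data rather than the paper's exact analysis: score each principal dimension of a reference model by how well it can be linearly predicted, with cross-validation, from another model's features. Dimensions that remain predictable across many different models are the shared ones.

```python
# Toy sketch: score each top PC of a "reference" model by how well it can be
# linearly predicted from another model's features. Highly predictable PCs
# are candidates for shared (universal) dimensions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def universality_scores(ref_feats, other_feats, n_dims=20):
    """Cross-validated R^2 for each of the reference model's top PCs."""
    ref_c = ref_feats - ref_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(ref_c, full_matrices=False)
    pcs = ref_c @ vt[:n_dims].T                        # reference PC scores
    ridge = RidgeCV(alphas=np.logspace(-3, 3, 7))
    pred = cross_val_predict(ridge, other_feats, pcs, cv=5)
    ss_res = ((pcs - pred) ** 2).sum(axis=0)
    ss_tot = ((pcs - pcs.mean(axis=0)) ** 2).sum(axis=0)
    return 1 - ss_res / ss_tot

# Demo with two synthetic "models" that share five high-variance latent dimensions.
rng = np.random.default_rng(0)
shared = 3.0 * rng.standard_normal((300, 5))
model_a = np.hstack([shared, rng.standard_normal((300, 95))])
model_b = np.hstack([shared @ rng.standard_normal((5, 5)), rng.standard_normal((300, 95))])
print(np.round(universality_scores(model_a, model_b), 2))  # first ~5 scores should be high
```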
Second, similar manipulations in other architectures were relatively ineffective: the effects were specific to convolutional architectures and relied critically on the use of spatially local filters.
15.12.2025 18:44
These results could not simply be explained by high-dimensional regression. First, we could drastically reduce the dimensionality of wide layers through PCA while still retaining strong performance.
15.12.2025 18:44
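A minimal sketch of that PCA check, assuming you already have a wide layer's activations and the corresponding brain responses in memory; the variable names and dimensionalities are placeholders, not the paper's settings.

```python
# Sketch: how much encoding performance survives when a wide layer's features
# are reduced to their top principal components. `feats` (n_images x n_units)
# and `responses` (n_images x n_voxels) are placeholders for your own data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def median_r2(X, y):
    """Median cross-validated R^2 across voxels/neurons for a ridge model."""
    pred = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 4, 7)), X, y, cv=5)
    ss_res = ((y - pred) ** 2).sum(axis=0)
    ss_tot = ((y - y.mean(axis=0)) ** 2).sum(axis=0)
    return np.median(1 - ss_res / ss_tot)

def pca_retention_curve(feats, responses, dims=(10, 50, 200, 1000)):
    """Compare the full-feature encoding score against PCA-reduced versions."""
    full = median_r2(feats, responses)
    for k in dims:
        k = min(k, feats.shape[0], feats.shape[1])
        reduced = PCA(n_components=k).fit_transform(feats)
        print(f"{k:>5} PCs: {median_r2(reduced, responses):.3f}  (full: {full:.3f})")

# pca_retention_curve(feats, responses)  # run with your own features and brain data
```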
comp neuro, neural manifolds, neuroAI, physics of learning
assistant professor @ harvard (physics, center for brain science, kempner institute) + @ Flatiron Institute
https://www.sychung.org
Professor at the Gatsby Unit and Sainsbury Wellcome Centre, UCL, trying to figure out how we learn
Researcher in Neuroscience & AI
CNRS, École Normale Supérieure, PSL
currently on secondment to Meta
Postdoc at Stanford | Developmental NeuroAI
Associate Professor in Psychology at Columbia, PI of https://www.dpmlab.org/
Postdoctoral researcher @NeuroSpin | AI 🤖 & neuroscience 🧠 enthusiast
https://linktr.ee/alirezakr
Neuroscience Professor at Harvard University. Personal account and posts here. Research group website: https://vnmurthylab.org.
The Johns Hopkins OneNeuro Initiative is building a university-wide neuroscience community to understand the brain - from molecules to mind.
Senior Lecturer (Associate Professor) in Natural Language Processing, Queen's University Belfast. NLProc β’ Cognitive Science β’ Semantics β’ Health Analytics.
Marie-Curie Postdoctoral Fellow with @dkaiserlab.bsky.social at Justus Liebig University Gießen | studying visual perception and attention using fMRI, M/EEG, and computational models.
https://sites.google.com/view/lu-chun-yeh
Cognitive Science PhD student @ JHU
|| assistant prof at University of Montreal || leading the systems neuroscience and AI lab (SNAIL: https://www.snailab.ca/) 🐌 || associate academic member of Mila (Quebec AI Institute) || #NeuroAI || vision and learning in brains and machines
Cognitive computational neuroscientist.
Prev. IKW-UOS@DE, Donders@NL, CIMeC@IT, IIT-B@IN
sushrutthorat.com
Neuroscientist at the University of Leeds, working on #EEGManyLabs, #rsatoolbox, co-founder mead.ac, tpc chair at ccneuro.org
interested in multi-sensory representations, semantics, neuroinformatics, open science
Cog comp neuro PhD at Johns Hopkins
http://kelseyhan-jhu.github.io
Academic, cognitive & vision scientist, computational modeller, cofounder @neuromatch Academy, He/His. This is a personal account.
neuroscientist in Korea (co-director of IBS-CNIR) interested in how neuroimaging (e.g. fMRI or widefield optical imaging) can facilitate closed-loop causal interventions (e.g. neurofeedback, patterned stimulations). https://tinyurl.com/hakwan
Cognitive neuroscientist at UPenn. Interested in scenes, memory, space. Occasionally thinks about other things.
Computational vision. Deep learning. Center for Computational Brain Science @Brown University. Artificial and Natural Intelligence Toulouse Institute (France). European Laboratory for Learning and Intelligent Systems (ELLIS).