PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...
Giacomo's commentary was in response to this great recent paper by Iqbal et al., also in PNAS:
"Biologically grounded neocortex computational primitives implemented on neuromorphic hardware improve vision transformer performance"
www.pnas.org/doi/10.1073/...
18.12.2025 15:22
Enjoyed @giacomoi.bsky.social's commentary in PNAS on how #NeuroAI and Neuromorphic Engineering should come together to let brain circuit motifs positively influence the design of computing systems:
Biological fidelity: The engine driving the neuromorphic renaissance
www.pnas.org/doi/full/10....
18.12.2025 15:22
Let me know if you are in San Diego for #NeurIPS and want to chat about #compneuro / #neuroAI and neuroscience-inspired computing!
01.12.2025 18:54
How do brain areas control each other? 🧠
In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry. 🧵
26.11.2025 19:32
A beautiful summary of our paper! Thank you @neurosock.bsky.social
28.11.2025 07:58
One example of how multiplexing might be implemented in the brain was shown in the great work by Thomas Akam with Dmitri Kullmann, a decade ago.
The papers are well cited, but the general multiplexing idea never took the field by storm as much as it deserved.
www.nature.com/articles/nrn...
24.11.2025 10:07
Research Fellowships
Our Mission to "increase the means of industrial education and extend the influence of science and art upon productive industry" (Supplemental Charter of 1851)
Postdoc fellowship opportunity for ECRs (<3 yrs post-PhD). Note that if you want to apply to work with me as your mentor, our dept has an internal deadline of Dec 4th, so please email me ASAP. Our internal process is shorter than the full application.
royalcommission1851.org/fellowships/...
21.11.2025 17:06
Brain-Like Processing Pathways Form in Models With Heterogeneous Experts
Examples of such pathways can be found in the interactions between cortical and subcortical networks during learning, or in sub-networks specializing for task characteristics such as difficulty or mod...
This new model opens up a whole new world of analysing multi-region interactions across trials and tasks! More analyses and findings can be found in our paper linked below. Work led by Jack Cook, with great help from @danakarca.bsky.social and @somnirons.bsky.social!
arxiv.org/abs/2506.02813
21.11.2025 12:01
We also find that while complex regions are needed to learn complex tasks, these tasks are eventually moved toward simpler regions, similar to how you may struggle the first time when learning a new skill, but slowly get better with practice.
21.11.2025 12:01
Furthermore, we find that these pathways mirror the expected behavior of pathways in the brain! Difficult tasks need to be learned in more complex regions, similar to how you need to think "harder" when learning how to solve a difficult math problem.
21.11.2025 12:01
With these three features in place, we find that our third criterion of distinct pathways is also met. While baseline models exhibit largely random expert usage patterns, our models exhibit highly structured pathways between regions that reliably emerge during learning.
21.11.2025 12:01
Our third contribution is expert dropout. Without this feature, models suffer large performance deficits when experts outside of the active pathway are disabled; however, we want models to depend primarily on the experts they use most.
21.11.2025 12:01
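A minimal numpy sketch of what expert dropout could look like at routing time (all names and numbers are my own illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of expert dropout as described above (details assumed): whole
# experts are randomly disabled during training and the gate renormalized,
# so a pathway cannot silently lean on rarely used experts.
def route_with_expert_dropout(gate_probs, drop_p=0.3):
    keep = rng.random(gate_probs.shape) > drop_p
    if not keep.any():
        keep[np.argmax(gate_probs)] = True   # always keep one expert alive
    masked = gate_probs * keep
    return masked / masked.sum()

probs = np.array([0.5, 0.3, 0.2])
print(route_with_expert_dropout(probs))
```

At test time the mask would simply be all-ones, as with ordinary dropout.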
When put together, these two contributions resulted in remarkable pathway consistency in our model, which we measured by correlating the routing patterns across 10 different models trained on the same tasks.
21.11.2025 12:01
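The correlation measure can be sketched like this (shapes and data are illustrative; the paper's exact procedure may differ):

```python
import numpy as np

# Flatten each model's (task x expert) routing matrix and correlate
# across independently trained runs: high values mean the same
# pathways emerged in both.
def pathway_consistency(routing_a, routing_b):
    return np.corrcoef(routing_a.ravel(), routing_b.ravel())[0, 1]

rng = np.random.default_rng(2)
base = rng.random((82, 8))                        # 82 tasks x 8 experts
similar = base + 0.05 * rng.standard_normal(base.shape)
unrelated = rng.random((82, 8))
print(pathway_consistency(base, similar))    # near 1: same pathways emerged
print(pathway_consistency(base, unrelated))  # near 0: random expert usage
```

Across 10 trained models, one would average this score over all pairs.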
We then identify three inductive biases that yield pathways that meet each of these criteria.
The first of these is a routing loss that penalizes the use of more complex experts during training, and the second scales this loss by the modelβs performance on the task being solved.
21.11.2025 12:01
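Those two biases could be sketched as a single penalty term (the names, costs, and exact scaling here are my assumptions, not the paper's):

```python
import numpy as np

# A routing loss that charges more for routing to complex experts,
# scaled by current performance on the task.
expert_costs = np.array([1.0, 4.0, 16.0])   # e.g. proportional to expert size

def routing_loss(routing_probs, task_accuracy):
    expected_cost = routing_probs @ expert_costs   # expected complexity used
    # Scaling by accuracy means the pressure to migrate toward cheaper
    # experts is weak while the task is unsolved and strong once mastered,
    # matching the shift to simpler regions with practice described elsewhere
    # in the thread.
    return task_accuracy * expected_cost

hard_routing = np.array([0.1, 0.2, 0.7])    # leans on the complex expert
print(routing_loss(hard_routing, 0.3), routing_loss(hard_routing, 0.95))
```

The penalty would be added to the task loss with some weighting coefficient.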
We then set three criteria to determine whether pathways had formed:
(1) Consistency: Models trained on the same tasks should have similar pathways
(2) Self-sufficiency: Pathways should be primarily reliant on their own experts
(3) Distinctness: Many distinct pathways should be present
21.11.2025 12:01
We first needed to create a model in which we could study pathway formation. We chose a Heterogeneous Mixture-of-Experts architecture, in which information can be dynamically routed to computational experts, or regions, of varying sizes.
We train the model on 82 tasks of varying complexity (ModCog)!
21.11.2025 12:01
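A toy version of such a heterogeneous Mixture-of-Experts forward pass, assuming a softmax gate and top-k routing (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Experts share an input/output interface but differ in hidden size:
# "regions" of varying computational complexity.
def make_expert(d_in, d_hidden, d_out):
    W1 = rng.normal(0, 0.1, (d_in, d_hidden))
    W2 = rng.normal(0, 0.1, (d_hidden, d_out))
    return lambda x: np.tanh(x @ W1) @ W2

d = 8
experts = [make_expert(d, h, d) for h in (4, 16, 64)]  # simple -> complex
W_gate = rng.normal(0, 0.1, (d, len(experts)))

def moe_forward(x, top_k=2):
    logits = x @ W_gate
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    chosen = np.argsort(probs)[-top_k:]        # route to the top-k experts
    out = sum(probs[i] * experts[i](x) for i in chosen)
    return out, chosen

x = rng.normal(size=d)
y, route = moe_forward(x)
print(y.shape, sorted(route))
```

Which experts appear in `route` across tasks is exactly the routing pattern the pathway analyses in the thread operate on.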
Brains have many pathways / subnetworks, but which principles underlie their formation?
In our #NeurIPS paper led by Jack Cook, we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models 🧵
#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813
21.11.2025 12:01
All good Dan!
14.11.2025 13:41
Check out this cool new work led by @pengfei-sun.bsky.social!
14.11.2025 10:26
With my great advisors and colleagues, @achterbrain.bsky.social @zhe @danakarca.bsky.social @neural-reckoning.org, we show that if heterogeneous (imprecise) axonal delays can capture the essential temporal structure of a task, spiking networks do not need precise synaptic weights to perform well.
13.11.2025 20:51
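A toy numpy illustration of the delay idea (the construction is mine, not the paper's):

```python
import numpy as np

# If each synapse's axonal delay compensates for its input's spike time,
# the spikes arrive in coincidence, so uniform, untuned weights already
# produce a strong response: the temporal structure of the task is carried
# by the delays rather than by precise synaptic weights.
input_spike_times = np.array([0.0, 3.0, 7.0])    # ms, three input neurons
delays = 10.0 - input_spike_times                # heterogeneous delays
arrivals = input_spike_times + delays            # every spike lands at t = 10
weights = np.ones_like(arrivals)                 # imprecise: all equal
response = weights[np.isclose(arrivals, 10.0)].sum()
print(arrivals, response)
```

With mismatched delays the same uniform weights would yield spread-out, subthreshold arrivals, which is why the delay distribution does the heavy lifting here.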
PhD in Data Science: Admissions Requirements | NYU CDS
Discover the PhD in Data Science requirements at NYU. Learn about deadlines, required degrees, coursework, and application details for Fall 2025 admissions.
ATTN 🚨: I will be looking for PhD students through NYU's Center for Data Science PhD program this year. Applicants should have an interest in either NeuroAI (specifically biological attention or AI interpretability) or ML for Remote Sensing. Visit my lab website for more info: lindsay-lab.github.io
24.08.2025 17:41
The difficult constellation image is solved by generating candidate solutions with a GAN and refining them with a genetic search that scores how well each solution's outline fits the dots in the constellation image.
Could we understand vision as a type of problem-solving? In this new paper, we develop a computational model that iteratively refines the hypothesis about the visual input with evolutionary search.
www.biorxiv.org/content/10.1...
work led by @tarunkhajuria.bsky.social
#visionscience #neuroAI
25.08.2025 10:28
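A toy sketch of the generate-then-refine loop (random point sets stand in for GAN samples; all details are illustrative, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate "outlines" are point sets (a GAN would propose these); a
# genetic loop mutates and selects them so the outline fits the dots.
dots = rng.random((12, 2))                   # dots in the constellation image

def fitness(outline):
    # negative mean distance from each dot to its nearest outline point
    d = np.linalg.norm(dots[:, None] - outline[None], axis=-1)
    return -d.min(axis=1).mean()

pop = [rng.random((12, 2)) for _ in range(32)]   # GAN samples would seed this
for _ in range(200):
    pop.sort(key=fitness, reverse=True)          # select the best candidates
    parents = pop[:8]
    children = [p + 0.02 * rng.standard_normal(p.shape)
                for p in parents for _ in range(3)]
    pop = parents + children                     # elitism + Gaussian mutation

best = max(pop, key=fitness)
print(fitness(best))
```

Seeding the population from a generative model rather than random noise is what keeps the search inside the space of plausible shapes.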
This is really funnyβ¦
20.08.2025 12:43
I find your point about a probabilistic definition interesting; I have never seen such a definition, but it could link neatly to my 'usefulness' framing, since any expected-value computation would need to take 'likelihood given context' into account.
20.08.2025 17:08
Now, usefulness in program generation might sometimes align with policy compression, but that depends a lot on the time horizon one assumes in defining 'usefulness'.
20.08.2025 17:08
It also does not fully align with my reading of it, but I found it an interesting angle. I find myself, naturally, influenced by Allen Newell's take on it (the one John Duncan tends to reference), which is aimed at usefulness in program generation.
20.08.2025 17:08
GVGAI-LLM: Evaluating Large Language Model Agents with Infinite Games
Can large language models play simple arcade games? Kind of. Sometimes. Slowly, and not as well as a simple search algorithm. And only if you format the input right. Of course, we made a benchmark to investigate this in more detail, because that's what we do. Paper here:
arxiv.org/html/2508.08...
18.08.2025 21:36