A diagram showing a maze with a gradient (of sensory cues) overlaid.
We're excited about this work as it:
⭐ Explores a fundamental question: how does structure sculpt function in artificial and biological networks?
⭐ Provides new models (pRNNs), tasks (multimodal mazes) and tools, in a pip-installable package:
github.com/ghoshm/Multi...
🧵9/9
01.08.2025 08:26 — 👍 7 🔁 0 💬 0 📌 0
A diagram showing how different architectures (circles) learn distinct input-sensitivities and memory dynamics.
Third, to explore why different circuits function differently, we measured 3 traits from every network.
We find that different architectures learn distinct sensitivities and memory dynamics, which shape their function.
E.g. we can predict a network's robustness to noise from its memory.
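As a toy illustration of the memory-trait idea, the sketch below pulses a network once and fits a decay timescale to its hidden activity; the `step_fn` interface and the log-linear fit are our assumptions for illustration, not the paper's actual trait measure.

```python
import numpy as np

def memory_timescale(step_fn, n_hidden, t_max=100):
    """Toy memory probe: drive the network with a one-step input pulse,
    then fit how fast hidden activity decays while free-running.
    step_fn(h, x) -> next hidden state is a hypothetical interface."""
    h = step_fn(np.zeros(n_hidden), 1.0)     # brief input pulse
    norms = []
    for _ in range(t_max):
        h = step_fn(h, 0.0)                  # roll out with zero input
        norms.append(np.linalg.norm(h))
    norms = np.asarray(norms)
    t = np.arange(1, t_max + 1)
    keep = norms > 1e-12                     # avoid log(0)
    slope, _ = np.polyfit(t[keep], np.log(norms[keep]), 1)
    return -1.0 / slope if slope < 0 else np.inf   # tau in exp(-t/tau)

# Example: a leaky scalar recurrence h <- 0.9*h + 0.1*x has tau ~ 9.5 steps.
print(memory_timescale(lambda h, x: 0.9 * h + 0.1 * x, n_hidden=1))
```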
🧵8/9
01.08.2025 08:26 — 👍 7 🔁 0 💬 1 📌 0
A diagram comparing the sample efficiency (learning speed) of two architectures (shown in grey and blue), across 4 maze tasks.
Second, to isolate how each pathway changes network function, we compare pairs of circuits which differ by one pathway.
Across pairs, we find that pathways have context-dependent effects.
E.g. here hidden-hidden connections decrease learning speed in one task but accelerate it in another.
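To make "pairs differing by one pathway" concrete, here is a minimal sketch (our encoding, with assumed pathway names, not necessarily the paper's notation) that represents each architecture as a 7-bit mask over the optional pathways and lists every such pair:

```python
OPTIONAL = ["W_ii", "W_hh", "W_oo", "W_hi", "W_oh", "W_io", "W_oi"]  # assumed names

def single_pathway_pairs():
    """Yield (mask_without, mask_with, pathway) for every pair of the
    128 architectures that differ by exactly one optional pathway."""
    for mask in range(2 ** len(OPTIONAL)):
        for bit, name in enumerate(OPTIONAL):
            if not mask & (1 << bit):                # pathway absent here...
                yield mask, mask | (1 << bit), name  # ...present in the twin

pairs = list(single_pathway_pairs())
print(len(pairs))  # 448 = 7 pathways x 64 masks lacking each one
```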
🧵7/9
01.08.2025 08:26 — 👍 7 🔁 0 💬 1 📌 0
A diagram comparing the fitness (task performance) of all pRNN architectures to the fully recurrent architecture, across 4 types of maze environments.
First, across tasks and functional metrics, many pRNN architectures perform as well as the fully recurrent architecture, despite having fewer pathways and as few as ¼ the number of parameters.
This shows that pRNNs are efficient, yet performant.
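A back-of-the-envelope parameter count makes the efficiency claim concrete; the layer sizes and pathway names below are illustrative assumptions, not the paper's settings.

```python
n_i, n_h, n_o = 8, 32, 4   # made-up input / hidden / output sizes

# Shape of each of the nine possible pathways, as (target, source).
shapes = {
    "W_ih": (n_h, n_i), "W_ho": (n_o, n_h),             # fixed feedforward
    "W_ii": (n_i, n_i), "W_hh": (n_h, n_h), "W_oo": (n_o, n_o),
    "W_hi": (n_i, n_h), "W_oh": (n_h, n_o),
    "W_io": (n_o, n_i), "W_oi": (n_i, n_o),
}

def n_params(pathways):
    return sum(rows * cols for rows, cols in (shapes[p] for p in pathways))

full = n_params(shapes)                  # all nine pathways: 1936 here
ff = n_params(["W_ih", "W_ho"])          # feedforward only: 384 here
print(full, ff, ff / full)               # ~1/5 of the parameters at these sizes
```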
🧵6/9
01.08.2025 08:26 — 👍 8 🔁 0 💬 1 📌 0
We trained over 25,000 pRNNs on these tasks and measured their:
* Fitness (task performance)
* Learning speed
* Robustness to various perturbations (e.g. increasing sensor noise; a toy version of this probe is sketched below)
From these data, we reach three main conclusions.
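A toy version of the noise-robustness probe flagged in the list above; `run_episode` is a hypothetical simulator hook, not the package's real API.

```python
import numpy as np

def robustness_curve(policy, run_episode, noise_levels, n_trials=100):
    """Re-evaluate a trained agent while increasing its sensor noise.
    run_episode(policy, noise_std) -> 1.0 if the maze was solved, else 0.0
    (an assumed interface). Returns the fraction solved per noise level."""
    return [np.mean([run_episode(policy, std) for _ in range(n_trials)])
            for std in noise_levels]

# e.g. robustness_curve(net, simulate_maze, [0.0, 0.1, 0.2, 0.4])
```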
🧵5/9
01.08.2025 08:26 — 👍 6 🔁 0 💬 1 📌 0
A diagram showing 2D mazes with gradients of sensory cues.
To compare pRNN function, we introduce a set of multisensory navigation tasks we call *multimodal mazes*.
In these tasks, we simulate networks as agents with noisy sensors, which provide local cues about the shortest path through each maze.
We add complexity by removing cues or walls.
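As a sketch of what a noisy local cue could look like in code (the distance-field formulation and Gaussian noise model are our assumptions for illustration):

```python
import numpy as np

def noisy_cues(distance_field, pos, noise_std=0.1, rng=None):
    """Read one sensory channel at a maze cell: for each neighbouring
    cell, the decrease in distance-to-goal, corrupted by Gaussian
    sensor noise. Assumes pos is an interior (non-border) cell."""
    rng = rng or np.random.default_rng()
    r, c = pos
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]  # N, S, W, E
    cues = np.array([distance_field[r, c] - distance_field[n] for n in neighbours],
                    dtype=float)
    return cues + rng.normal(0.0, noise_std, size=len(neighbours))
```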
🧵4/9
01.08.2025 08:26 — 👍 8 🔁 0 💬 1 📌 0
A diagram showing a feedforward network, a fully recurrent network, and 3 of the 126 partially recurrent architectures between these two extremes.
This allows us to interpolate between:
Feedforward - with no additional pathways.
Fully recurrent - with all nine pathways.
We term the 126 architectures between these two extremes *partially recurrent neural networks* (pRNNs), as signal propagation can be bidirectional, yet sparse.
🧵3/9
01.08.2025 08:26 — 👍 7 🔁 0 💬 1 📌 0
A neural network model with input, hidden and output nodes, and 9 weight matrices.
We start from an artificial neural network with 3 sets of units and 9 possible weight matrices (or pathways).
By keeping the two feedforward pathways (W_ih, W_ho) and adding the other 7 in any combination, we can generate 2^7 distinct architectures.
All 128 are shown in the post above.
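A minimal sketch of that combinatorics (the optional-pathway names are assumed labels, not necessarily the paper's notation):

```python
from itertools import product

FIXED = ["W_ih", "W_ho"]                                             # always kept
OPTIONAL = ["W_ii", "W_hh", "W_oo", "W_hi", "W_oh", "W_io", "W_oi"]  # assumed names

# Every subset of the 7 optional pathways defines one architecture.
architectures = [FIXED + [w for w, on in zip(OPTIONAL, bits) if on]
                 for bits in product([False, True], repeat=len(OPTIONAL))]
print(len(architectures))  # 2**7 = 128, including the feedforward and
                           # fully recurrent extremes (126 in between)
```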
🧵2/9
01.08.2025 08:26 — 👍 8 🔁 0 💬 1 📌 0
A diagram showing 128 neural network architectures.
How does the structure of a neural circuit shape its function?
@neuralreckoning.bsky.social & I explore this in our new preprint:
doi.org/10.1101/2025...
🤖🧠🧪
🧵1/9
01.08.2025 08:26 — 👍 63 🔁 24 💬 2 📌 3
Preprint update: The new version of #SPARKS🎇 is out!
Everything's in here: sparks.crick.ac.uk
A thread on what changed 🧵👇
@flor-iacaruso.bsky.social @sdrsd.bsky.social @alexegeaweiss.bsky.social
#neuroskyence #NeuroAI #ML #BioInspiredAI
31.07.2025 10:33 — 👍 8 🔁 4 💬 1 📌 1
Happy to discuss! What do you disagree with?
25.07.2025 16:15 — 👍 0 🔁 0 💬 0 📌 0
Hiring a post-doc at Imperial in EEE. Broad in scope + flexible on topics: neural networks & new AI accelerators from a HW/SW co-design perspective!
w/ @neuralreckoning.bsky.social @achterbrain.bsky.social in Intelligent Systems and Networks group.
Plz share! 🔗: www.imperial.ac.uk/jobs/search-...
25.07.2025 13:27 — 👍 14 🔁 9 💬 2 📌 1
Aw thanks!
As I mentioned in the talk, this is my favourite figure!
Though, sadly, it has been confined to the supplement of the upcoming paper.
25.07.2025 12:39 — 👍 1 🔁 0 💬 0 📌 0
10. Keep your AIm in mind (🎯)
As a scientist, your focus should be on generating insights and understanding, not models with an extra percentage point or two of accuracy!
With that in mind, less performant, but more interpretable models may be preferable.
25.07.2025 10:58 — 👍 4 🔁 0 💬 0 📌 0
9. Aim for an interpretable, trustworthy model (🤝)
By using methods from explainable AI, we can try to understand why models may make specific predictions.
This can improve trust, though it remains an open research problem!
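For example (our illustration, not one from the paper), scikit-learn's permutation importance gives a quick, model-agnostic view of which features drive a model's predictions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn; the drop in score estimates its importance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```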
25.07.2025 10:58 — 👍 4 🔁 0 💬 1 📌 0
8. Add what you know into your model (🦾)
While many AI methods learn from scratch, incorporating prior knowledge (such as physical laws or symmetries) can help, and there is a range of techniques for doing this!
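One lightweight example (ours, with placeholder names): if you know your problem is invariant to some transformation, you can bake that symmetry in at prediction time by averaging over transformed inputs.

```python
import numpy as np

def symmetrized_predict(predict, x, transforms):
    """Enforce a known invariance by averaging a model's predictions over
    symmetry-transformed copies of the input; `predict` and `transforms`
    are placeholders for your own model and symmetry group."""
    return np.mean([predict(t(x)) for t in transforms], axis=0)

# e.g. for a 1D signal known to be invariant to left-right flips:
# y = symmetrized_predict(model.predict, x, [lambda v: v, lambda v: v[::-1]])
```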
25.07.2025 10:58 — 👍 4 🔁 0 💬 1 📌 0
7. Start with synthetic data (🧮)
If your model is not working, the problem could be:
* Your model (your code + hyperparameters)
* Your data
To resolve this, generate some simple data (e.g. Gaussian points). If your model can't handle data like these, it probably won't work on real data! A minimal version of this check is sketched below.
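The sketch uses Gaussian blobs and a simple stand-in model (swap in your own):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two well-separated Gaussian blobs: about as easy as data gets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(200, 2)),
               rng.normal(+2.0, 1.0, size=(200, 2))])
y = np.repeat([0, 1], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
print(f"accuracy on toy Gaussians: {acc:.2f}")  # should be ~1.0; if your
# own model can't match this, debug it before touching real data
```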
25.07.2025 10:58 — 👍 6 🔁 0 💬 1 📌 0
6. Start small and simple (🐣)
Many problems don't require a complex model.
Try:
* Establishing a baseline - e.g. guessing randomly or always guessing the mean
* Simple methods - e.g. linear regression
* Then, if necessary, increasingly complex models (see the sketch below)
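A minimal sketch of that progression on made-up data: check that a simple method actually beats an always-guess-the-mean baseline before reaching for anything fancier.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Made-up data: a noisy linear trend standing in for your real problem.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = 3.0 * X[:, 0] + rng.normal(0.0, 2.0, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)  # always guess the mean
linear = LinearRegression().fit(X_tr, y_tr)                 # simple first model

print(f"baseline R^2: {baseline.score(X_te, y_te):.2f}")    # ~0 by construction
print(f"linear   R^2: {linear.score(X_te, y_te):.2f}")      # should be high
```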
25.07.2025 10:58 — 👍 8 🔁 0 💬 1 📌 1
The Good Research Code Handbook
This handbook is for grad students, postdocs and PIs who do a lot of programming as part of their research. It will teach you, in a practical manner, how to organize your code so that it is easy to...
4. Invest time in your code (📦)
It will improve the quality and reproducibility of your work and save you time in the long run!
We recommend following @patrickmineault.bsky.social's excellent Good Research Code Handbook:
goodresearch.dev/index.html
25.07.2025 10:58 — 👍 5 🔁 0 💬 1 📌 0
3. Don't reinvent the wheel (🛞)
Most code you need already exists!
Use standard packages (e.g. @scikit-learn.org and @pytorch.org) as much as possible.
And if you are short on data or compute, consider building on existing (pre-trained) models (e.g. @hf.co).
25.07.2025 10:58 — 👍 5 🔁 0 💬 1 📌 0
2. Learn some terminology (🗣️)
At first, many terms in papers, talks, etc. will seem opaque and confusing.
Getting familiar with these will help your understanding!
We provide a glossary of terms for reference, but really the best way is to read, listen and join seminars or reading groups.
25.07.2025 10:58 — 👍 4 🔁 0 💬 1 📌 0
1. Frame your scientific question (🖼️)
Before diving into research, you need to consider your aim and any data you may have.
This will help you to focus on relevant methods and consider if AI methods will be helpful at all.
@scikit-learn.org provide a great map along these lines!
25.07.2025 10:58 — 👍 4 🔁 0 💬 1 📌 0
How can we best use AI in science?
Myself and 9 other research fellows from @imperial-ix.bsky.social use AI methods in domains from plant biology (🌱) to neuroscience (🧠) and particle physics.
Together we suggest 10 simple rules @plos.org 🧵
doi.org/10.1371/jour...
25.07.2025 10:58 — 👍 46 🔁 14 💬 2 📌 0
Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks
We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond fi...
New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social).
arxiv.org/abs/2507.16043
Surrogate gradients are popular for training SNNs, but some worry whether they really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇
🤖🧠🧪
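For readers new to the idea, here is a generic fast-sigmoid surrogate-gradient spike function in PyTorch; this is a textbook-style sketch, not the exact formulation used in the preprint.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth fast-sigmoid
    derivative in the backward pass (one common, generic choice)."""

    @staticmethod
    def forward(ctx, v, beta=10.0):
        ctx.save_for_backward(v)
        ctx.beta = beta
        return (v > 0).float()   # spike if membrane v crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Replace the step's zero-almost-everywhere gradient with a
        # fast-sigmoid surrogate: 1 / (beta*|v| + 1)^2.
        return grad_out / (ctx.beta * v.abs() + 1.0) ** 2, None

v = torch.randn(5, requires_grad=True)
SurrogateSpike.apply(v).sum().backward()
print(v.grad)  # nonzero, despite the hard step in the forward pass
```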
24.07.2025 17:03 — 👍 40 🔁 16 💬 1 📌 1
Had a great time discussing multisensory integration @imrf.bsky.social!
And really enjoyed sharing our new work too!
21.07.2025 08:14 — 👍 20 🔁 1 💬 1 📌 0
Modelling Audio-Visual Reaction Time with Recurrent Mean-Field Networks
Understanding how the brain integrates multisensory information during detection and decision-making remains an active area of research. While many inferences have been drawn about behavioural outcome...
It's been a minute (11 years) since my last @imrf.bsky.social
I'm excited to see all the great research.
And I'm delighted to give a talk on Friday about new modelling studies from Rebecca Brady's PhD, in collab w/ @bizleylab.bsky.social and @jennycampos.bsky.social
1/2
www.biorxiv.org/content/10.1...
15.07.2025 10:34 — 👍 5 🔁 2 💬 1 📌 0
Off to my first @imrf.bsky.social conference!
I'll be giving a talk on Friday (talk session 9) on multisensory network architectures - new work from me & @neuralreckoning.bsky.social.
But say hello or DM me before then!
15.07.2025 09:22 — 👍 10 🔁 3 💬 0 📌 1