
Marcus Ghosh

@marcusghosh.bsky.social

Computational neuroscientist. Postdoctoral fellow @Imperial I-X, funded by Schmidt Sciences. Working on multisensory integration with @neuralreckoning.bsky.social

763 Followers  |  260 Following  |  78 Posts  |  Joined: 15.11.2024

Latest posts by marcusghosh.bsky.social on Bluesky

A diagram showing a maze with a gradient (of sensory cues) overlaid.

We're excited about this work as it:

⭐ Explores a fundamental question: how does structure sculpt function in artificial and biological networks?

⭐ Provides new models (pRNNs), tasks (multimodal mazes) and tools, in a pip-installable package:

github.com/ghoshm/Multi...

🧵9/9

01.08.2025 08:26 · 👍 7   🔁 0   💬 0   📌 0
A diagram showing how different architectures (circles) learn distinct input-sensitivities and memory dynamics.

Third, to explore why different circuits function differently, we measured 3 traits from every network.

We find that different architectures learn distinct sensitivities and memory dynamics, which shape their function.

E.g. we can predict a network's robustness to noise from its memory.

🧵8/9

01.08.2025 08:26 · 👍 7   🔁 0   💬 1   📌 0
A diagram comparing the sample efficiency (learning speed) of two architectures (shown in grey and blue), across 4 maze tasks.

Second, to isolate how each pathway changes network function, we compare pairs of circuits that differ by a single pathway.

Across pairs, we find that pathways have context-dependent effects.

E.g. here, hidden-hidden connections decrease learning speed in one task but accelerate it in another.

🧵7/9

01.08.2025 08:26 · 👍 7   🔁 0   💬 1   📌 0
A diagram comparing the fitness (task performance) of all pRNN architectures to the fully recurrent architecture, across 4 types of maze environments.

First, across tasks and functional metrics, many pRNN architectures perform as well as the fully recurrent architecture,

despite having fewer pathways and as few as ¼ the number of parameters.

This shows that pRNNs are efficient, yet performant.

🧵6/9

01.08.2025 08:26 · 👍 8   🔁 0   💬 1   📌 0

We trained over 25,000 pRNNs on these tasks.

And measured their:
📈 Fitness (task performance)
💹 Learning speed
📉 Robustness to various perturbations (e.g. increasing sensor noise)

From these data, we reach three main conclusions.

🧵5/9

01.08.2025 08:26 · 👍 6   🔁 0   💬 1   📌 0
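To make the protocol concrete, here is a minimal sketch of how one of these measurements could look; the `network`/`task` API below is hypothetical, not the package's actual interface:

```python
import numpy as np

def robustness_curve(network, task, noise_levels=(0.0, 0.25, 0.5, 1.0), n_trials=100):
    """Re-evaluate a trained network while increasing sensor noise,
    recording how its fitness (task performance) degrades."""
    return {
        sd: float(np.mean([task.run(network, sensor_noise=sd) for _ in range(n_trials)]))
        for sd in noise_levels
    }
```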
A diagram showing 2D mazes with gradients of sensory cues.

To compare pRNN function, we introduce a set of multisensory navigation tasks we call *multimodal mazes*.

In these tasks, we simulate networks as agents with noisy sensors, which provide local clues about the shortest path through each maze.

We add complexity by removing cues or walls.

🧵4/9

01.08.2025 08:26 · 👍 8   🔁 0   💬 1   📌 0
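A minimal sketch of the sensing step described above, assuming the maze is a 2D array of cue values (the names and API are illustrative, and walls and borders are ignored for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbours(pos):
    r, c = pos
    return [(r - 1, c), (r, c + 1), (r + 1, c), (r, c - 1)]  # N, E, S, W

def sense(cue_map, pos, noise_sd=0.5):
    """One noisy reading per neighbouring cell. cue_map holds the clean
    gradient, e.g. negative distance-to-goal along the shortest path."""
    cues = np.array([cue_map[n] for n in neighbours(pos)], dtype=float)
    return cues + rng.normal(0.0, noise_sd, size=4)  # corrupt with sensor noise

def greedy_step(cue_map, pos):
    """A memoryless baseline agent: step toward the largest noisy cue."""
    return neighbours(pos)[int(np.argmax(sense(cue_map, pos)))]
```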
A diagram showing a feedforward network, a fully recurrent network and 3 of the 126 partially recurrent architectures between these two extremes.

This allows us to interpolate between:

* Feedforward - with no additional pathways.
* Fully recurrent - with all nine pathways.

We term the 126 architectures between these two extremes *partially recurrent neural networks* (pRNNs), as signal propagation can be bidirectional, yet sparse.

🧵3/9

01.08.2025 08:26 · 👍 7   🔁 0   💬 1   📌 0
A neural network model with input, hidden and output nodes, and 9 weight matrices.

We start from an artificial neural network with 3 sets of units and 9 possible weight matrices (or pathways).

By keeping the two feedforward pathways (W_ih, W_ho) and adding the other 7 in any combination, we can generate 2^7 = 128 distinct architectures.

All 128 are shown in the post above.

🧵2/9

01.08.2025 08:26 · 👍 8   🔁 0   💬 1   📌 0
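The counting is easy to reproduce; a sketch, assuming one naming convention for the 9 pathways (the paper's package may label them differently):

```python
from itertools import product

# 9 pathways connect input (i), hidden (h) and output (o) populations.
# The two feedforward ones (W_ih, W_ho) are always kept; the other 7 are optional.
OPTIONAL = ["W_ii", "W_hi", "W_oi", "W_hh", "W_oh", "W_io", "W_oo"]

# Every subset of the 7 optional pathways defines one architecture: 2^7 = 128.
architectures = [
    {"W_ih", "W_ho"} | {p for p, on in zip(OPTIONAL, bits) if on}
    for bits in product([0, 1], repeat=7)
]

assert len(architectures) == 128  # from feedforward (2 pathways) to fully recurrent (9)
```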
A diagram showing 128 neural network architectures.

How does the structure of a neural circuit shape its function?

@neuralreckoning.bsky.social & I explore this in our new preprint:

doi.org/10.1101/2025...

🤖🧠🧪

🧵1/9

01.08.2025 08:26 · 👍 63   🔁 24   💬 2   📌 3

Preprint update: The new version of #SPARKS 🎇 is out!
Everything's in here: sparks.crick.ac.uk
A thread on what changed 🧵👇
@flor-iacaruso.bsky.social @sdrsd.bsky.social @alexegeaweiss.bsky.social
#neuroskyence #NeuroAI #ML #BioInspiredAI

31.07.2025 10:33 · 👍 8   🔁 4   💬 1   📌 1

Happy to discuss - what do you disagree with?

25.07.2025 16:15 · 👍 0   🔁 0   💬 0   📌 0

Hiring a post-doc at Imperial in EEE. Broad in scope + flexible on topics: neural networks & new AI accelerators from a HW/SW co-design perspective!

w/ @neuralreckoning.bsky.social @achterbrain.bsky.social in the Intelligent Systems and Networks group.

Plz share! 🚀: www.imperial.ac.uk/jobs/search-...

25.07.2025 13:27 · 👍 14   🔁 9   💬 2   📌 1

Aw thanks!

As I mentioned in the talk, this is my favourite figure!

Though, sadly, it has been confined to the supplement of the upcoming paper.

25.07.2025 12:39 · 👍 1   🔁 0   💬 0   📌 0

10. Keep your AIm in mind (🎯)

As a scientist, your focus should be on generating insights and understanding, not models with an extra percentage or two in accuracy!

With that in mind, less performant but more interpretable models may be preferable.

25.07.2025 10:58 · 👍 4   🔁 0   💬 0   📌 0

9. Aim for an interpretable, trustworthy model (🤖)

By using methods from explainable AI, we can try to understand why models make specific predictions.

This can improve trust, though it remains an open research problem!

25.07.2025 10:58 · 👍 4   🔁 0   💬 1   📌 0
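For instance, permutation importance is one such explainable-AI method: shuffle each input feature in turn and measure how much performance drops. A sketch with placeholder data and model, not taken from the post:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Features whose permutation hurts performance most are the ones
# the model actually relies on for its predictions.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```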

8. Add what you know into your model (🦾)

While many AI methods learn from scratch, incorporating prior knowledge (such as physical laws or symmetries) can help, and there is a range of techniques for doing this!

25.07.2025 10:58 · 👍 4   🔁 0   💬 1   📌 0
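One common recipe, sketched here with a hypothetical constraint function g standing in for a known physical law (this example is not from the post), is to add a penalty that vanishes when the model respects the prior:

```python
import torch

def loss_with_prior(y_hat, y, x, g, weight=0.1):
    """Physics-informed style loss: fit the data, but also penalise
    violations of a known law encoded as g(x, y_hat) = 0."""
    data_loss = torch.mean((y_hat - y) ** 2)   # fit the observations
    prior_loss = torch.mean(g(x, y_hat) ** 2)  # respect the known law
    return data_loss + weight * prior_loss
```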

7. Start with synthetic data (🧮)

If your model is not working, the problem could be:
* Your model (your code + hyperparameters)
* Your data

To resolve this, generate some simple data (e.g. Gaussian points); if your model can't handle data like these, it probably won't work on real data!

25.07.2025 10:58 · 👍 6   🔁 0   💬 1   📌 0
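A minimal sanity check along these lines, with LogisticRegression standing in for "your model":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters: about the easiest dataset possible.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)

model = LogisticRegression().fit(X, y)
print(f"Accuracy on easy synthetic data: {model.score(X, y):.2f}")  # expect ~1.0
```

If accuracy is poor even here, debug the model and code before blaming the real data.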

6. Start small and simple (🐣)

Many problems don't require a complex model.

Try:
* Establishing a baseline - e.g. guessing randomly or always guessing the mean
* Simple methods - e.g. linear regression
* Then, if necessary, increasingly complex models

25.07.2025 10:58 · 👍 8   🔁 0   💬 1   📌 1
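A sketch of that progression on made-up regression data (the data and model choices here are illustrative):

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)  # always guess the mean
linear = LinearRegression().fit(X_tr, y_tr)                 # simple method next

print(f"Baseline R^2: {baseline.score(X_te, y_te):.2f}")  # ~0 by construction
print(f"Linear   R^2: {linear.score(X_te, y_te):.2f}")    # near 1 on this easy task
```

Only escalate to more complex models if the simple one falls short.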
FAIR Principles - GO FAIR: In 2016, the 'FAIR Guiding Principles for scientific data management and stewardship' were published in Scientific Data. The authors intended to provide guidelines to improve the Findability, Accessib...

5. Be FAIR (🤝)

Make sure your work is openly available and easy to reuse.

For example:
* Use @github.com to share code
* @conda.org to share the packages your code needs
* Zenodo to share data and models

www.go-fair.org/fair-princip...

25.07.2025 10:58 · 👍 4   🔁 0   💬 1   📌 0
The Good Research Code Handbook: This handbook is for grad students, postdocs and PIs who do a lot of programming as part of their research. It will teach you, in a practical manner, how to organize your code so that it is easy to...

4. Invest time in your code (🦄)

It will improve the quality and reproducibility of your work and save you time in the long run!

We recommend following @patrickmineault.bsky.social's excellent Good Research Code Handbook:
goodresearch.dev/index.html

25.07.2025 10:58 · 👍 5   🔁 0   💬 1   📌 0

3. Don't reinvent the wheel (🛞)

Most code you need already exists!

Use standard packages (e.g. @scikit-learn.org and @pytorch.org) as much as possible.

And if you are short on data or compute, consider building on existing (pre-trained) models (e.g. @hf.co).

25.07.2025 10:58 · 👍 5   🔁 0   💬 1   📌 0
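For example, a pre-trained model can be used in a couple of lines; the sentiment task below is just an illustration, not one named in the post:

```python
from transformers import pipeline

# Downloads a small pre-trained model on first use; no training data needed.
classifier = pipeline("sentiment-analysis")
print(classifier("This preprint is great!"))  # e.g. [{'label': 'POSITIVE', ...}]
```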

2. Learn some terminology (🗣️)

At first, many terms in papers, talks, etc. will seem opaque and confusing.

Getting familiar with these will help your understanding!

We provide a glossary of terms for reference, but really the best way is to read, listen and join seminars or reading groups.

25.07.2025 10:58 · 👍 4   🔁 0   💬 1   📌 0

1. Frame your scientific question (🖼️)

Before diving into research, you need to consider your aim and any data you may have.

This will help you to focus on relevant methods and consider if AI methods will be helpful at all.

@scikit-learn.org provide a great map along these lines!

25.07.2025 10:58 · 👍 4   🔁 0   💬 1   📌 0

How can we best use AI in science?

Nine other research fellows from @imperial-ix.bsky.social and I use AI methods in domains from plant biology (🌱) to neuroscience (🧠) and particle physics (🎇).

Together, we suggest 10 simple rules in @plos.org 🧵

doi.org/10.1371/jour...

25.07.2025 10:58 · 👍 46   🔁 14   💬 2   📌 0
Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks - We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond fi...

New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social).

arxiv.org/abs/2507.16043

Surrogate gradients are popular for training SNNs, but some worry whether they really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇

🤖🧠🧪

24.07.2025 17:03 · 👍 40   🔁 16   💬 1   📌 1
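For readers new to the trick: a spiking neuron's hard threshold has zero gradient almost everywhere, so surrogate GD swaps in a smooth gradient on the backward pass only. A minimal PyTorch sketch (the fast-sigmoid surrogate here is a common choice, not necessarily the one used in the preprint):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # hard threshold: spike if v crosses 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate gradient lets errors flow through the threshold.
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2

spike = SurrogateSpike.apply  # use in place of a Heaviside step when training
```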

Had a great time discussing multisensory integration @imrf.bsky.social!

And really enjoyed sharing our new work too!

21.07.2025 08:14 · 👍 20   🔁 1   💬 1   📌 0
Neuroscience for machine learners: A freely available short course on neuroscience for people with a machine learning background. Designed by Dan Goodman and Marcus Ghosh.

@neuralreckoning.bsky.social & I have an online course (videos, text, code): neuroscience for those with a machine learning background.

neuro4ml.github.io

16.07.2025 20:38 · 👍 25   🔁 3   💬 1   📌 0
Modelling Audio-Visual Reaction Time with Recurrent Mean-Field Networks: Understanding how the brain integrates multisensory information during detection and decision-making remains an active area of research. While many inferences have been drawn about behavioural outcome...

It's been a minute (11 years) since my last @imrf.bsky.social
I'm excited to see all the great research.
And I'm delighted to give a talk on Friday about new modelling studies from Rebecca Brady's PhD, in collab w/ @bizleylab.bsky.social and @jennycampos.bsky.social
1/2
www.biorxiv.org/content/10.1...

15.07.2025 10:34 · 👍 5   🔁 2   💬 1   📌 0

Off to my first @imrf.bsky.social conference!

I'll be giving a talk on Friday (talk session 9) on multisensory network architectures - new work from me & @neuralreckoning.bsky.social.

But say hello or DM me before then!

15.07.2025 09:22 · 👍 10   🔁 3   💬 0   📌 1

Had a great time teaching on the @trendcamina.bsky.social summer school in Zambia!

Awesome students and super TAs:
@akoumoundourou.bsky.social, @tomnotgeorge.bsky.social, @jsoldadomagraner.bsky.social, @ashvparker.bsky.social, @pollytur.bsky.social, @skuechenhoff.bsky.social

14.07.2025 14:44 · 👍 11   🔁 0   💬 0   📌 0
