
Alessandro Galloni

@argalloni.bsky.social

Computational neuroscience postdoc in the Milstein Lab at Rutgers University, studying synaptic plasticity, bio-plausible deep learning / neuroAI, neuromorphic computing. Previously @ Francis Crick Institute & UCL

411 Followers  |  289 Following  |  52 Posts  |  Joined: 14.08.2024

Latest posts by argalloni.bsky.social on Bluesky

Dear universities,

I am begging you to stop requiring letters of recommendation for master's programmes. You and I both know you don't read them, so stop asking for them.

Instead, have applicants list a name and get in touch if it's a borderline case.

Signed,
Everyone.

28.07.2025 18:28 — 👍 533    🔁 85    💬 16    📌 24

Also, even if the theory is wrong or overly simplistic, putting it in terms of concrete physiological variables could still be useful for coming up with ideas about which non-obvious experiments to do next. You might find evidence for *one* role of cell type X, even if not its only role

28.07.2025 15:51 — 👍 3    🔁 0    💬 1    📌 0
Preview
Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks
We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond fi...

New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social).

arxiv.org/abs/2507.16043

Surrogate gradients are popular for training SNNs, but some question whether they can really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇

🤖🧠🧪
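For anyone new to the area: the trick behind surrogate gradients is to keep the hard spike threshold in the forward pass but substitute a smooth derivative in the backward pass. A minimal PyTorch sketch of that idea (an illustration only, not the preprint's code; the fast-sigmoid surrogate and the LIF parameters are arbitrary choices):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v, beta=10.0):
        ctx.save_for_backward(v)
        ctx.beta = beta
        return (v > 0).float()  # hard threshold: emit a spike where v crosses 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate: a smooth stand-in for the Dirac delta
        surrogate = 1.0 / (ctx.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None  # no gradient w.r.t. beta

spike = SurrogateSpike.apply

def lif_step(v, syn_input, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron with reset-to-zero."""
    v = v + (dt / tau) * (-v + syn_input)  # leaky integration of input current
    s = spike(v - 1.0)                     # spike when the membrane exceeds 1
    v = v * (1.0 - s)                      # reset membrane where spikes occurred
    return v, s
```

Because the backward pass is smooth everywhere, ordinary gradient descent can then shift individual spike times, which is the capability the preprint tests.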

24.07.2025 17:03 — 👍 40    🔁 16    💬 1    📌 1

Exciting new preprint from the lab: "Adopting a human developmental visual diet yields robust, shape-based AI vision". A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168

08.07.2025 13:03 — 👍 124    🔁 55    💬 3    📌 10

Definitely, with these short message formats words often come across as harsher/more direct than we intend. The tone of voice sounds friendly in our heads, but doesn't always read that way

11.07.2025 21:58 — 👍 2    🔁 0    💬 1    📌 0

(these bsky/xitter debates seem to devolve into semantics suspiciously often :P)

11.07.2025 16:03 — 👍 2    🔁 0    💬 1    📌 0

Aside from the needless aggression, I've found reading this conversation quite enlightening :) I didn't realize there was such ambiguity around the term "mean-field". Personally, I associate it more with the single-neuron averaging (~=ANN), but I think the population-averaging view is equally valid

11.07.2025 16:03 — 👍 3    🔁 0    💬 1    📌 0

I was thinking something even more basic, like the work that has been done on how individual receptive fields change when animals are reared in strange environments

11.07.2025 13:44 — 👍 2    🔁 0    💬 1    📌 0

The closest I can think of is something like the efficient coding hypothesis, where we can make a theory about representation that would be optimal for encoding sensory info in a given environment, and predict how this should change when sensory statistics are altered
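As a toy illustration of that logic (not any specific published model; the Gaussian stimuli and the whitening objective are assumptions made just for this sketch), one can derive the decorrelating filters that are optimal for one set of input statistics, then change the statistics and watch the predicted filters change:

```python
import numpy as np

rng = np.random.default_rng(0)

def whitening_filters(stimuli):
    """Decorrelating (whitening) filters derived purely from stimulus statistics."""
    C = np.cov(stimuli, rowvar=False)            # covariance of the sensory input
    eigvals, eigvecs = np.linalg.eigh(C)
    # ZCA whitening: responses become uncorrelated with unit variance
    return eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

# "normal" rearing environment: two strongly correlated input channels
env_a = rng.normal(size=(10_000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])
# altered environment: the same channels, but weakly correlated
env_b = rng.normal(size=(10_000, 2)) @ np.array([[1.0, 0.2], [0.2, 1.0]])

# the hypothesis predicts different optimal filters for different statistics
print(whitening_filters(env_a))
print(whitening_filters(env_b))
```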

11.07.2025 13:26 — 👍 1    🔁 0    💬 1    📌 0

I'm reminded of one of my favorite quotes from Eve Marder: "The process of turning a word model into a formal mathematical model invariably forces the experimentalist to confront his or her hidden assumptions."

The point isn't to capture all the richness, it's to put your money where your mouth is.

07.07.2025 11:35 — 👍 66    🔁 15    💬 3    📌 1

(1/7) New preprint from the Rajan lab! 🧠🤖
@ryanpaulbadman1.bsky.social & Riley Simmons-Edler show, through cog sci, neuro & ethology, how an AI agent with fewer 'neurons' than an insect can forage, find safety & dodge predators in a virtual world. Here's what we built.

Preprint: arxiv.org/pdf/2506.06981

02.07.2025 18:33 — 👍 88    🔁 31    💬 3    📌 2

But in this example, wouldn't you say that this neuron doesn't actually represent either fish or faces, but rather it represents some feature that is present in both? So it does have a representation, just not one that can be simplistically described by a single word

05.06.2025 19:52 — 👍 2    🔁 0    💬 1    📌 0

In proportion to the number of people that make use of the dataset. If it drives lots of follow-on science, then we should continue investing to collect such datasets. If we hear crickets, then maybe wait before collecting the next connectome.

04.06.2025 23:06 — 👍 3    🔁 0    💬 0    📌 0

But I generally agree that a blind copy/paste approach won't get us very far. I still think it's a useful dataset to have, and gathering more connectome data is a good thing. Copying every detail is probably a bad strategy, but copying *some* might prove useful; we shouldn't dismiss it prematurely

04.06.2025 17:25 — 👍 2    🔁 0    💬 1    📌 0

Useful for understanding the brain, even if not for building better LLMs. It might also be useful for "AI" approaches that deviate from the current mainstream (e.g. neuromorphic computing); too early to say. Strong claims that something will never work tend not to age very well

04.06.2025 17:10 — 👍 1    🔁 0    💬 1    📌 0

Obviously we can't just plug-and-play and expect to outperform an LLM. But this take seems overly dismissive of a promising approach that is only just starting to be explored. Bio architectures may yet prove useful for select applications (e.g. where we have specific hardware constraints)

04.06.2025 14:34 — 👍 5    🔁 0    💬 1    📌 0

I find the arguments somewhat unconvincing. To start, advancing AI is not the only goal. Second, just because there is degeneracy and we don't know everything about every molecular interaction doesn't mean that knowing the general architecture is not a useful and important first step.

04.06.2025 14:34 — 👍 2    🔁 0    💬 1    📌 0

And yet most humanoid robots are pretty bad at basic movements. I suspect that when it comes to interacting with the physical world, the hardware/energy/latency constraints are so different from what DL/GPUs are optimized for that bio-architectures and neuromorphic hardware might prove more useful

04.06.2025 14:14 — 👍 3    🔁 0    💬 0    📌 0

I'm curious whether you think the same applies to robotics, where DL progress has been more limited. E.g. I found recent work from @neurograce.bsky.social's lab quite inspiring, and it seems like a good counter-example where the connectome alone gives a useful architecture for learning relevant movements

04.06.2025 14:00 — 👍 1    🔁 0    💬 1    📌 0

The recent work I presented at Cosyne this year is now out on bioRxiv! Combining bio-plausible credit assignment using (apical) dendrites, realistic E/I cell types, and BTSP-based synaptic plasticity. One small step closer to understanding learning in the brain! Check out the summary here 👇
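For readers unfamiliar with BTSP (behavioral timescale synaptic plasticity): unlike Hebbian rules that pair spikes over milliseconds, BTSP ties potentiation to seconds-long presynaptic eligibility traces gated by dendritic plateau potentials. A rough numpy sketch of that flavor of rule (illustrative only, not the model in the preprint; all parameter values and the gating form are assumptions):

```python
import numpy as np

def btsp_like_update(pre_spikes, plateau, w, dt=0.1,
                     tau_e=1.5, lr=0.05, w_max=1.0):
    """Toy BTSP-flavored rule: a seconds-long presynaptic eligibility trace,
    gated by a plateau/instructive signal, drives potentiation.

    pre_spikes: (T, n_syn) binary array of presynaptic spikes
    plateau:    (T,) binary array, 1 while a dendritic plateau is active
    """
    e = np.zeros_like(w)                        # one eligibility trace per synapse
    for t in range(pre_spikes.shape[0]):
        e += -(dt / tau_e) * e + pre_spikes[t]  # slowly decaying trace (~seconds)
        if plateau[t]:
            w += lr * e * (w_max - w)           # bounded, trace-proportional update
    return w

# usage: 5 s at dt = 0.1 s, 20 synapses, a plateau in the middle of the trial
T, n_syn = 50, 20
rng = np.random.default_rng(1)
pre = (rng.random((T, n_syn)) < 0.1).astype(float)
plateau = np.zeros(T)
plateau[24:27] = 1
w = btsp_like_update(pre, plateau, w=np.full(n_syn, 0.2))
```

The key property is that synapses active seconds before or after the plateau still get strengthened, because their eligibility traces have not yet decayed.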

29.05.2025 00:59 — 👍 9    🔁 0    💬 0    📌 0
Preview
Motor learning refines thalamic influence on motor cortex - Nature
Imaging and optogenetics in mice provide insight into the interplay between the primary motor cortex and the motor thalamus during learning, showing that thalamic inputs have a key role in the executi...

Our paper is out in Nature.

By examining various inputs to the motor cortex during learning, we found that thalamic inputs learn to activate the cortical neurons encoding the movement being learned.

Tour de force by Assaf in collab with Felix and Marcus. Congrats!

www.nature.com/articles/s41...

08.05.2025 00:12 — 👍 161    🔁 43    💬 3    📌 0

It can be a bit intimidating the first time you go (especially if you don't know many people in the field), but during the conference itself I also met loads of people who were really interested in how cellular properties can affect systems-level function.

06.05.2025 20:08 — 👍 3    🔁 0    💬 1    📌 0

I highly recommend giving it a shot. In 2018 I had a poster accepted that was heavily cellular (dendrites, single-cell biophysical modelling, slice patch-clamp experiments). In my application I just emphasised the systems/computational relevance. If you DM me I'm happy to share my old submission

06.05.2025 19:54 — 👍 3    🔁 0    💬 1    📌 0

This is an amazing set of experiments, we're clearly living in the golden age of connectomics and structure/function mapping. Congrats to the whole team!

01.05.2025 15:50 — 👍 1    🔁 0    💬 0    📌 0

"My job is to carve off a sliver of the ineffable, and to eff it."
www.experimental-history.com/p/28-slightl...

30.04.2025 12:07 — 👍 9    🔁 6    💬 1    📌 0


Not too long ago, we were asked when we're going to replace Wikipedia's human-curated knowledge with AI. 

The answer? We're not.

The community of volunteers behind Wikipedia is the most important and unique element of Wikipedia's success. For nearly 25 years, Wikipedia editors have researched, deliberated, discussed, built consensus, and collaboratively written the largest encyclopedia humankind has ever seen. Their care and commitment to reliable encyclopedic knowledge is something AI cannot replace.

That is why our new AI strategy doubles down on the volunteers behind Wikipedia.

We will use AI to build features that remove technical barriers to allow the humans at the core of Wikipedia to spend their valuable time on what they want to accomplish, and not on how to technically achieve it. Our investments will be focused on specific areas where generative AI excels, all in the service of creating unique opportunities that will boost Wikipedia's volunteers:


Wikimedia has a new AI strategy!

A colleague and I spent months working on it. I am so happy that it is out. wikimediafoundation.org/news/2025/04...

30.04.2025 13:57 — 👍 201    🔁 58    💬 5    📌 8
Preview
Neuromorphic Questionnaire
This form collects valuable information from the Neuromorphic Community as part of a project led by Matteo Saponati, Laura Kriener, Sebastian Billaudelle, Filippo Moro, and Melika Payvand. The goal is...

Take our short 5-min anonymous survey on the Neuromorphic field's current state & future:

📋 tinyurl.com/3jkszrnr
🗓️ Open until May 12, 2025

Results will be shared openly and submitted for publication. Your input will help us understand how interdisciplinary trends are shaping the field.

16.04.2025 10:27 — 👍 8    🔁 10    💬 1    📌 0

And just today the EIC of Nature Neuroscience told me in public that they are a for-profit company and if they had to pay reviewers they would "need" to raise prices.

24.04.2025 05:58 — 👍 22    🔁 4    💬 2    📌 1

I am teaching my PhD writing workshop course this quarter. Question: are there any words/phrases said to you by an advisor/mentor that stuck with you, were memorable, or were particularly helpful? If so, please reply below!

23.04.2025 19:45 — 👍 658    🔁 149    💬 347    📌 65
