
GerstnerLab

@gerstnerlab.bsky.social

The Laboratory of Computational Neuroscience @EPFL studies models of neurons, networks of neurons, synaptic plasticity, and learning in the brain.

242 Followers  |  116 Following  |  13 Posts  |  Joined: 10.02.2025

Latest posts by gerstnerlab.bsky.social on Bluesky

P4 52 “Coding Schemes in Non-Lazy Artificial Neural Networks” by @avm.bsky.social

30.09.2025 09:29 — 👍 2    🔁 0    💬 0    📌 0

WEDNESDAY 14:00 – 15:30

P4 25 “Rarely categorical, always high-dimensional: how the neural code changes along the cortical hierarchy” by @shuqiw.bsky.social

P4 35 “Biologically plausible contrastive learning rules with top-down feedback for deep networks” by @zihan-wu.bsky.social

30.09.2025 09:29 — 👍 4    🔁 0    💬 1    📌 0

WEDNESDAY 12:30 – 14:00

P3 4 “Toy Models of Identifiability for Neuroscience” by @flavioh.bsky.social

P3 55 “How many neurons is “infinitely many”? A dynamical systems perspective on the mean-field limit of structured recurrent neural networks” by Louis Pezon

30.09.2025 09:29 — 👍 0    🔁 0    💬 1    📌 0

P2 65 “Rate-like dynamics of spiking neural networks” by Kasper Smeets

30.09.2025 09:29 — 👍 0    🔁 0    💬 1    📌 0

TUESDAY 18:00 – 19:30

P2 2 “Biologically informed cortical models predict optogenetic perturbations” by @bellecguill.bsky.social

P2 12 “High-precision detection of monosynaptic connections from extra-cellular recordings” by @shuqiw.bsky.social

30.09.2025 09:29 — 👍 1    🔁 0    💬 1    📌 0

Lab members are at the Bernstein conference @bernsteinneuro.bsky.social with 9 posters! Here’s the list:

TUESDAY 16:30 – 18:00

P1 62 “Measuring and controlling solution degeneracy across task-trained recurrent neural networks” by @flavioh.bsky.social

30.09.2025 09:29 — 👍 7    🔁 3    💬 1    📌 0

New in @pnas.org: doi.org/10.1073/pnas...

We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”

Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.

#cogsci #neuroskyence

28.09.2025 11:07 — 👍 95    🔁 36    💬 0    📌 3

🎉 "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 🎉

Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.

www.biorxiv.org/content/10.1...
1/2

19.09.2025 08:01 — 👍 91    🔁 21    💬 4    📌 2

Work led by Martin Barry with the supervision of Wulfram Gerstner and Guillaume Bellec @bellecguill.bsky.social

04.09.2025 16:00 — 👍 0    🔁 0    💬 0    📌 0

In experiments (models & simulations), we showed how this approach supports stable retention of old tasks while learning new ones (split CIFAR-100, ASC…)

04.09.2025 16:00 — 👍 0    🔁 0    💬 1    📌 0

We designed a bio-inspired, context-specific gating of plasticity and neuronal activity that allows for a drastic reduction in catastrophic forgetting.

We also show that our model is capable of both forward and backward transfer! All of this thanks to the shared neuronal activity across tasks.

04.09.2025 16:00 — 👍 1    🔁 0    💬 1    📌 0

We designed a Gating/Availability model that detects task-selective neurons (the neurons most useful for the task) during learning, shunts the activity of the others (Gating), and decreases the learning rate of the task-selective neurons (Availability).

04.09.2025 16:00 — 👍 0    🔁 0    💬 1    📌 0
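The gating/availability mechanism described in this thread can be illustrated with a minimal toy sketch. This is a hypothetical illustration, not the paper's code: the names (`select_neurons`, `availability`), the Hebbian-style update, and the availability-discounted selection rule are all assumptions chosen to mirror the post's description, in which task-selective neurons are detected, the activity of the others is shunted (gating), and the selected neurons' learning rate is later reduced (availability).

```python
import numpy as np

# Toy sketch (hypothetical, not the paper's implementation): per task, the
# most strongly driven still-available neurons are selected; all other
# neurons are shunted to zero (gating), and the selected neurons' future
# plasticity is reduced (availability).

rng = np.random.default_rng(0)
n_hidden, n_in = 32, 8
W = rng.normal(scale=0.1, size=(n_hidden, n_in))
availability = np.ones(n_hidden)          # per-neuron learning-rate factor

def forward(x, gate_mask):
    return (W @ x) * gate_mask            # gating: shunt unselected neurons

def select_neurons(x_batch, k=8):
    # Selectivity proxy: mean absolute drive, discounted by availability so
    # that neurons committed to earlier tasks are not recruited again.
    drive = np.abs(W @ x_batch.T).mean(axis=1) * availability
    mask = np.zeros(n_hidden)
    mask[np.argsort(drive)[-k:]] = 1.0
    return mask

def learn_task(x_batch, lr=0.01, decay=0.1):
    global W
    mask = select_neurons(x_batch)
    for x in x_batch:
        h = forward(x, mask)
        # Hebbian-style update, gated and scaled by per-neuron availability.
        W += lr * (availability * mask)[:, None] * np.outer(h, x)
    availability[mask == 1.0] *= decay    # selected neurons become less plastic
    return mask

task_a = rng.normal(size=(20, n_in))
task_b = rng.normal(size=(20, n_in))
mask_a = learn_task(task_a)
mask_b = learn_task(task_b)               # recruits mostly fresh neurons
```

Discounting the selectivity score by availability is one way to make later tasks recruit mostly fresh neurons, which is what limits interference with earlier tasks in this sketch.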
Context selectivity with dynamic availability enables lifelong continual learning “You never forget how to ride a bike”, but how is that possible? The brain is able to learn complex skills, stop the practice for years, learn other…

🧠 “You never forget how to ride a bike”, but how is that possible?
Our study proposes a bio-plausible meta-plasticity rule that shapes synapses over time, enabling selective recall based on context.

04.09.2025 16:00 — 👍 16    🔁 3    💬 1    📌 0

So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰

Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩‍🔬

25.08.2025 16:18 — 👍 37    🔁 13    💬 0    📌 0

Attending #CCN2025?
Come by our poster in the afternoon (4th floor, Poster 72) to talk about the sense of control, empowerment, and agency. 🧠🤖

We propose a unifying formulation of the sense of control and use it to empirically characterize the human subjective sense of control.

🧑‍🔬🧪🔬

13.08.2025 08:40 — 👍 9    🔁 1    💬 1    📌 1
Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each ne...

Work led by Valentin Schmutz (@bio-emergent.bsky.social), in collaboration with Johanni Brea and Wulfram Gerstner.

08.08.2025 15:25 — 👍 5    🔁 0    💬 0    📌 0
From Spikes To Rates (YouTube video by Gerstner Lab)

Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.

08.08.2025 15:25 — 👍 61    🔁 17    💬 2    📌 1
OSF

Excited to present at the PIMBAA workshop at #RLDM2025 tomorrow!
We study curiosity using intrinsically motivated RL agents and develop an algorithm to generate diverse, targeted environments for comparing curiosity drives.

Preprint (accepted but not yet published): osf.io/preprints/ps...

11.06.2025 20:09 — 👍 7    🔁 1    💬 0    📌 0
Representational similarity modulates neural and behavioral signatures of novelty Novelty signals in the brain modulate learning and drive exploratory behaviors in humans and animals. While the perceived novelty of a stimulus is known to depend on previous experience, the effect of...

Stoked to be at RLDM! Curious how novelty and exploration are impacted by generalization across similar stimuli? Then don't miss my flash talk in the PIMBAA workshop (tmr at 10:30, E McNabb Theatre) or stop by my poster tmr (#74)! Looking forward to chatting 🤩

www.biorxiv.org/content/10.1...

11.06.2025 20:41 — 👍 20    🔁 5    💬 1    📌 0

Our new preprint 👀

09.06.2025 19:32 — 👍 30    🔁 6    💬 0    📌 0

Interested in high-dim chaotic networks? Ever wondered about the structure of their state space? @jakobstubenrauch.bsky.social has answers: from the separation of fixed points and dynamics onto distinct shells, to a shared lower-dimensional manifold and linear prediction of the dynamics.

10.06.2025 19:45 — 👍 13    🔁 2    💬 0    📌 0

Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner

theoreticalneuroscience.no/thn22

John Hopfield received the 2024 Physics Nobel prize for his model published in 1982. What is the model all about? @icepfl.bsky.social

07.12.2024 08:24 — 👍 33    🔁 5    💬 0    📌 2
Brain models draw closer to real-life neurons Researchers at EPFL have shown how rough, biological spiking neural networks can mimic the behavior of brain models called recurrent neural networks. The findings challenge traditional assumptions and...

A cool EPFL News article was written about our recent neurotheory paper on spikes vs rates!

Super engaging text by science communicator Nik Papageorgiou.
actu.epfl.ch/news/brain-m...

Definitely more accessible than the original physics-style, 4.5-page letter 🤓
journals.aps.org/prl/abstract...

22.01.2025 16:04 — 👍 25    🔁 9    💬 2    📌 0
Learning from the unexpected A researcher at EPFL working at the crossroads of neuroscience and computational science has developed an algorithm that can predict how surprise and novelty affect behavior.

Super excited to see my PhD thesis featured by EPFL! 🎓
actu.epfl.ch/news/learnin...

P.S.: There's even a French version of the article! It feels so fancy! 😎 👨‍🎨 🇫🇷
actu.epfl.ch/news/apprend...

10.01.2025 14:29 — 👍 22    🔁 6    💬 0    📌 0
Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each ne...

New round of spike vs rate?

The concentration of measure phenomenon can explain the emergence of rate-based dynamics in networks of spiking neurons, even when no two neurons are the same.

This is what's shown in the last paper of my PhD, out today in Physical Review Letters 🎉 tinyurl.com/4rprwrw5

06.01.2025 16:45 — 👍 26    🔁 8    💬 1    📌 0

Pre-print 🧠🧪
Is mechanism modeling dead in the AI era?

ML models trained to predict neural activity fail to generalize to unseen opto perturbations. But mechanism modeling can solve that.

We argue that "perturbation testing" is the right way to evaluate mechanisms in data-constrained models.

1/8

08.01.2025 16:33 — 👍 116    🔁 46    💬 4    📌 2
