@gerstnerlab.bsky.social
The Laboratory of Computational Neuroscience @EPFL studies models of neurons, networks of neurons, synaptic plasticity, and learning in the brain.

Lab members are at the Bernstein conference @bernsteinneuro.bsky.social with 9 posters! Here’s the list:

TUESDAY 16:30 – 18:00
P1 62 “Measuring and controlling solution degeneracy across task-trained recurrent neural networks” by @flavioh.bsky.social

TUESDAY 18:00 – 19:30
P2 2 “Biologically informed cortical models predict optogenetic perturbations” by @bellecguill.bsky.social
P2 12 “High-precision detection of monosynaptic connections from extra-cellular recordings” by @shuqiw.bsky.social
P2 65 “Rate-like dynamics of spiking neural networks” by Kasper Smeets

WEDNESDAY 12:30 – 14:00
P3 4 “Toy Models of Identifiability for Neuroscience” by @flavioh.bsky.social
P3 55 “How many neurons is ‘infinitely many’? A dynamical systems perspective on the mean-field limit of structured recurrent neural networks” by Louis Pezon

WEDNESDAY 14:00 – 15:30
P4 25 “Rarely categorical, always high-dimensional: how the neural code changes along the cortical hierarchy” by @shuqiw.bsky.social
P4 35 “Biologically plausible contrastive learning rules with top-down feedback for deep networks” by @zihan-wu.bsky.social
P4 52 “Coding Schemes in Non-Lazy Artificial Neural Networks” by @avm.bsky.social

30.09.2025 09:29
New in @pnas.org: doi.org/10.1073/pnas...
We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”
Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.
#cogsci #neuroskyence
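To make the “noisy-TV” effect concrete, here is a hedged toy sketch, not the paper’s 61-state task or its fitted models: an agent that seeks observation-level novelty keeps returning to a stochastic state, because its random observations never repeat and so always look new. All names and numbers below are illustrative.

```python
import numpy as np

# Toy sketch of the "noisy-TV" trap (NOT the paper's task or model).
rng = np.random.default_rng(0)

seen = {}  # visit counts per observation

def novelty(obs):
    # Novelty decays with how often this exact observation has been seen.
    return 1.0 / np.sqrt(1.0 + seen.get(obs, 0))

def observe(state):
    # Deterministic states always show the same picture; the "tv" state
    # shows a fresh random picture on every visit.
    if state == "tv":
        return ("noise", int(rng.integers(10**9)))
    return ("wall", state)

visits = {"a": 0, "b": 0, "tv": 0}
for t in range(1000):
    # Greedy novelty-seeking: sample one observation per state,
    # then move to the state whose observation looks most novel.
    peeks = {s: novelty(observe(s)) for s in visits}
    state = max(peeks, key=peeks.get)
    obs = observe(state)
    seen[obs] = seen.get(obs, 0) + 1
    visits[state] += 1

print(visits)  # e.g. {'a': 1, 'b': 1, 'tv': 998}: novelty never wears off at the TV
```

Once the deterministic states have been seen, their novelty decays, while the stochastic state stays maximally novel forever, so the novelty-seeker gets stuck there.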
π "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 π
Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.
www.biorxiv.org/content/10.1...
1/2
Work led by Martin Barry with the supervision of Wulfram Gerstner and Guillaume Bellec @bellecguill.bsky.social
🧠 “You never forget how to ride a bike”, but how is that possible?
Our study proposes a bio-plausible meta-plasticity rule that shapes synapses over time, enabling selective recall based on context.
We designed a bio-inspired, context-specific gating of plasticity and neuronal activity that allows a drastic reduction in catastrophic forgetting.
Our Gating/Availability model detects task-selective neurons (the neurons most useful for the current task) during learning, shunts the activity of the others (gating), and decreases the learning rate of the task-selective neurons (availability).
In experiments (models & simulations), we showed how this approach supports stable retention of old tasks while learning new ones (split CIFAR-100, ASC…).
We also show that our model is capable of both forward and backward transfer, thanks to the neuronal activity shared across tasks.
04.09.2025 16:00
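A minimal sketch of the gating/availability idea described in this thread, under assumptions of our own: the selectivity scores are random stand-ins, and the gate strength (0.1) and availability (0.05) are illustrative numbers, not the paper’s rule or values.

```python
import numpy as np

# Hedged sketch of gating + availability (NOT the paper's exact rule):
# given per-neuron task-selectivity scores, "gating" shunts the activity of
# non-selective neurons, and "availability" lowers the learning rate of the
# selective ones so that later tasks cannot overwrite them.
rng = np.random.default_rng(0)
n = 8
activity = rng.random(n)            # hidden-layer activity on the current task
selectivity = rng.random(n)         # hypothetical task-selectivity score per neuron
selective = selectivity > 0.5       # neurons most useful for this task

gate = np.where(selective, 1.0, 0.1)            # shunt non-selective activity
availability = np.where(selective, 0.05, 1.0)   # protect selective synapses

gated_activity = gate * activity
grad = rng.standard_normal(n)       # stand-in for a backprop gradient
base_lr = 0.1
weight_update = base_lr * availability * grad * gated_activity

print(weight_update)  # large where the network is free to learn, tiny where protected
```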
So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰
Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩‍🔬
Attending #CCN2025?
Come by our poster in the afternoon (4th floor, Poster 72) to talk about the sense of control, empowerment, and agency. 🧠🤖
We propose a unifying formulation of the sense of control and use it to empirically characterize the human subjective sense of control.
🧑‍🔬🧪🔬
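The poster’s unifying formulation isn’t spelled out here; as a point of reference, one standard information-theoretic formalization of control is one-step empowerment, the channel capacity from actions to next states. A minimal sketch using the classic Blahut-Arimoto iteration; the transition matrices below are made up.

```python
import numpy as np

# Hedged reference sketch (NOT necessarily the poster's formulation):
# one-step empowerment = channel capacity from actions A to next states S',
# computed with the Blahut-Arimoto algorithm.
def empowerment(P, iters=200):
    """P[a, s'] = probability of next state s' given action a."""
    n_a, _ = P.shape
    p = np.full(n_a, 1.0 / n_a)       # action distribution to optimize
    for _ in range(iters):
        q = p @ P                      # marginal over next states
        # KL divergence of each action's outcome from the marginal
        d = np.sum(P * np.log((P + 1e-12) / (q + 1e-12)), axis=1)
        p = p * np.exp(d)              # Blahut-Arimoto update
        p /= p.sum()
    q = p @ P
    d = np.sum(P * np.log((P + 1e-12) / (q + 1e-12)), axis=1)
    return float(p @ d)                # capacity in nats

# Two actions with distinct, reliable outcomes: high sense of control.
reliable = np.array([[0.95, 0.05],
                     [0.05, 0.95]])
# Actions whose outcomes are identical noise: no control at all.
noisy = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
print(empowerment(reliable), empowerment(noisy))  # ~0.49 nats vs ~0 nats
```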
Is it possible to go from spikes to rates without averaging?
We show how to exactly map recurrent spiking networks into recurrent rate networks with the same number of neurons. No temporal or spatial averaging needed!
Presented at the Gatsby Neural Dynamics Workshop, London.
Work led by Valentin Schmutz (@bio-emergent.bsky.social), in collaboration with Johanni Brea and Wulfram Gerstner.
08.08.2025 15:25
Excited to present at the PIMBAA workshop at #RLDM2025 tomorrow!
We study curiosity using intrinsically motivated RL agents and develop an algorithm that generates diverse, targeted environments for comparing curiosity drives.
Preprint (accepted but not yet published): osf.io/preprints/ps...
Stoked to be at RLDM! Curious how novelty and exploration are impacted by generalization across similar stimuli? Then don't miss my flash talk in the PIMBAA workshop (tomorrow at 10:30, E McNabb Theatre) or stop by my poster tomorrow (#74)! Looking forward to chatting 🤩
www.biorxiv.org/content/10.1...
Our new preprint 👇
Interested in high-dim chaotic networks? Ever wondered about the structure of their state space? @jakobstubenrauch.bsky.social has answers: from a separation of fixed points and dynamics onto distinct shells, to a shared lower-dim manifold and linear prediction of dynamics.
09.06.2025 19:32
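For context, the kind of network in question can be set up in a few lines. This is the textbook random rate network in the chaotic regime (coupling gain g > 1), not the preprint’s analysis; sizes and parameters are arbitrary.

```python
import numpy as np

# Minimal example of a high-dimensional chaotic rate network
# (classic random network with gain g > 1; NOT the preprint's analysis).
rng = np.random.default_rng(0)
N, g, dt = 500, 1.5, 0.05
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random coupling matrix

x = rng.standard_normal(N)  # network state
trace = []
for t in range(2000):
    x = x + dt * (-x + J @ np.tanh(x))  # Euler step of dx/dt = -x + J*phi(x)
    trace.append(x[:3].copy())          # record three example units

print(np.array(trace)[-3:])  # irregular, non-repeating fluctuations: chaos
```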
Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model, with Wulfram Gerstner
theoreticalneuroscience.no/thn22
John Hopfield received the 2024 Physics Nobel Prize for his model published in 1982. What is the model all about? @icepfl.bsky.social
10.06.2025 19:45
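For readers meeting the model for the first time, a minimal Hopfield network fits in a few lines: Hebbian weights store binary patterns, and asynchronous updates relax a corrupted cue back to the stored memory. A sketch; sizes and noise level are arbitrary.

```python
import numpy as np

# A minimal Hopfield network (1982-style): binary neurons, Hebbian weights,
# recall by relaxation from a corrupted cue.
rng = np.random.default_rng(0)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # stored memories

# Hebbian learning: W = (1/N) sum_mu xi_mu xi_mu^T, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Start from a corrupted version of pattern 0 and let the dynamics relax.
s = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)  # flip 20% of bits
for _ in range(10):
    for i in rng.permutation(N):              # asynchronous updates
        s[i] = 1 if W[i] @ s >= 0 else -1

print(np.mean(s == patterns[0]))  # ~1.0: the stored memory is retrieved
```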
A cool EPFL News article was written about our recent neurotheory paper on spikes vs rates!
Super engaging text by science communicator Nik Papageorgiou.
actu.epfl.ch/news/brain-m...
Definitely more accessible than the original physics-style, 4.5-page letter 🤓
journals.aps.org/prl/abstract...
Super excited to see my PhD thesis featured by EPFL!
actu.epfl.ch/news/learnin...
P.S.: There's even a French version of the article! It feels so fancy! 👨‍🎨 🇫🇷
actu.epfl.ch/news/apprend...
New round of spike vs rate?
The concentration of measure phenomenon can explain the emergence of rate-based dynamics in networks of spiking neurons, even when no two neurons are the same.
This is what's shown in the last paper of my PhD, out today in Physical Review Letters: tinyurl.com/4rprwrw5
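The phenomenon itself is easy to demonstrate numerically. A toy sketch, not the paper’s construction: an average over a growing, fully heterogeneous population fluctuates less and less, with standard deviation shrinking like 1/sqrt(N), even though no two units are the same.

```python
import numpy as np

# Toy demo of concentration of measure (NOT the paper's proof): the summed
# input from many heterogeneous, independently spiking neurons concentrates
# around its mean as the population grows.
rng = np.random.default_rng(0)
for N in [10, 100, 1000, 10000]:
    gains = rng.uniform(0.5, 1.5, N)        # every neuron is different
    spikes = rng.random((200, N)) < 0.2     # 200 trials of independent spikes
    drive = (spikes * gains).mean(axis=1)   # population-summed input per trial
    print(N, drive.std())                   # std shrinks like 1/sqrt(N)
```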
Pre-print 🧠🧪
Is mechanism modeling dead in the AI era?
ML models trained to predict neural activity fail to generalize to unseen opto perturbations. But mechanism modeling can solve that.
We say "perturbation testing" is the right way to evaluate mechanisms in data-constrained models
1/8
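A hedged sketch of what perturbation testing means in practice, with synthetic data and an illustrative linear model (nothing below is from the paper): fit on unperturbed trials, then evaluate on trials with an unseen perturbation. Good within-distribution prediction says little about whether the perturbation response is captured.

```python
import numpy as np

# Sketch of the perturbation-testing protocol with made-up data:
# train on unperturbed trials, score on held-out *perturbed* trials.
rng = np.random.default_rng(0)

def make_trials(n, opto):
    x = rng.standard_normal((n, 5))                  # stimulus features
    y = x @ np.array([1.0, -2.0, 0.5, 0.0, 1.0])
    y += 3.0 if opto else 0.0                        # opto shifts the circuit
    return x, y + 0.1 * rng.standard_normal(n)

x_tr, y_tr = make_trials(500, opto=False)            # training: no perturbation
x_te, y_te = make_trials(500, opto=True)             # test: unseen perturbation

w, *_ = np.linalg.lstsq(x_tr, y_tr, rcond=None)      # purely predictive fit
mse_within = np.mean((x_tr @ w - y_tr) ** 2)
mse_perturbed = np.mean((x_te @ w - y_te) ** 2)
print(mse_within, mse_perturbed)  # small vs large: prediction != mechanism
```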