I can only recommend the @bernsteinneuro.bsky.social conference. Topics are similar to Cosyne, but it feels so much nicer.
21.02.2026 22:46
@j-b-eppler.bsky.social
Postdoc in Computational Neuroscience | CRM Barcelona Mostly interested in the mechanisms underlying learning, forgetting, memory formation, and most recently also creativity. And "representational drift".
Most recent models explain "representational drift" as continuous learning in some way (e.g. doi.org/10.1038/s415..., doi.org/10.7554/eLif..., or our recent doi.org/10.1073/pnas...). I don't like the name "representational drift" either, but I'm afraid it's here for good...
13.02.2026 16:12
While I agree that the paper is beautiful (and some aspects are remarkably stable), a median decoding error of nearly 90° after 25 days (Fig. 1k) or a |Δ preferred direction| of 45° after 4 weeks hardly suggests that there is no drift. 😉
13.02.2026 16:08
In the model, reproducing the empirical signal correlation → noise correlation relationship requires two things:
- A Hebbian component
AND
- A stochastic process
Drift emerges from the interplay between the two.
🧠🧪 9/9
A scientific figure, showing that we can reproduce the described predictive effect of signal on noise correlations. But only via a combination of Hebbian plasticity and a stochastic process.
Fig. 6: Modeling the mechanism
Finally, the model!
So, we see Hebbian structure in the data. But is a Hebbian mechanism enough to explain the observed drift?
No.
🧠🧪 8/9
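The mechanism described in the two posts above (a Hebbian component plus a stochastic process, with drift emerging from their interplay) can be illustrated with a minimal toy simulation. This is not the paper's actual model; the network form, the tanh response, the normalization, and all parameters are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper)
n, eta, sigma, steps = 40, 0.02, 0.02, 2000
W = 0.1 * rng.standard_normal((n, n))          # effective connectivity
patterns = rng.standard_normal((5, n))         # fixed stimulus patterns

responses = []                                 # response to pattern 0 over "days"
for t in range(steps):
    x = patterns[rng.integers(5)]              # random stimulus each step
    r = np.tanh(W @ x)                         # evoked response
    W += eta * np.outer(r, x)                  # Hebbian component
    W += sigma * rng.standard_normal((n, n))   # stochastic component
    W /= max(1.0, np.linalg.norm(W) / 10.0)    # crude homeostatic normalization
    if t % 200 == 0:
        responses.append(np.tanh(W @ patterns[0]))

# Drift readout: similarity of later responses to the initial one
sims = [np.corrcoef(responses[0], r)[0, 1] for r in responses]
```

With only the Hebbian term (sigma = 0) the similarities stay near 1; with only noise (eta = 0) they collapse toward chance; the combination gives the gradual, bounded reorganization the thread calls drift.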
One idea:
During intense learning, the balance shifts, perhaps to allow for consolidation, so that representations can reorganize or transfer to other areas of the brain.
🧠🧪 7/9
A scientific figure, showing how during fear conditioning the stabilizing effect of signal correlation on noise correlation is diminished.
Fig. 5: Fear conditioning decreases Hebbian signature
During fear conditioning, the signal correlation → noise correlation relationship is dampened.
The Hebbian plasticity is weakened. During learning!
Why might that be?
🧠🧪 6/9
A scientific figure, showing that noise correlation stability between consecutive imaging time points grows with signal correlation at the first imaging time point. The reverse effect is absent.
Fig. 4: Signal correlation stabilizes noise correlation
Not only do signal correlations predict future noise correlation, they also predict noise correlation stability between t and t+1.
Stronger signal correlation → more stable noise correlation.
🧠🧪 5/9
A scientific figure, showing that signal correlations at a given time point are predictive of noise correlations at a later time point (2 days apart). The reverse effect is very small.
Fig. 3: Hebbian plasticity during drift
Here's the first big result:
👉 Signal correlations at time t predict noise correlations at time t+1.
If two neurons co-activate now,
their future functional coupling rises.
This is the classic:
"Fire together → wire together."
🧠🧪 4/9
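The asymmetry in this result (SC at t predicts NC at t+1, but not the reverse) can be illustrated on synthetic pairwise data. The 0.5 coupling, the drift rate, and the pair count below are invented for the sketch, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pairwise SC/NC values for 500 neuron pairs at sessions t and t+1
n_pairs = 500
sc_t = rng.standard_normal(n_pairs)
nc_t = rng.standard_normal(n_pairs)

# Toy Hebbian signature: future coupling partly inherits present co-activation,
# while tuning itself just drifts and ignores present coupling
nc_t1 = 0.5 * sc_t + 0.5 * rng.standard_normal(n_pairs)
sc_t1 = sc_t + 0.3 * rng.standard_normal(n_pairs)

forward = np.corrcoef(sc_t, nc_t1)[0, 1]   # SC(t) -> NC(t+1): large
reverse = np.corrcoef(nc_t, sc_t1)[0, 1]   # NC(t) -> SC(t+1): near zero
```

In this toy setup `forward` comes out strongly positive while `reverse` hovers around zero, mirroring the direction of the predictive effect described above.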
A scientific figure, showing stable distributions of signal and noise correlations, but volatility of individual signal and noise correlations between imaging days 1 and 3.
Fig. 2: A volatile steady state
Both signal and noise correlations appear to follow a stable distribution across days…
BUT at the level of individual pairs, both are highly volatile.
So at the population level it looks stable, yet at the pairwise level it's highly dynamic.
🧠🧪 3/9
A scientific figure, describing how we computed signal and noise correlations.
Fig. 1: Defining SC and NC
- Signal correlations (SC): co-active cells
- Noise correlations (NC): functional connectivity
For the SC/NC aficionados:
We compute SC from the median response and estimate both SC and NC via bootstrapping.
👉 At a single time point, SC and NC are uncorrelated.
🧠🧪 2/9
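A rough sketch of an SC/NC estimator in this spirit: SC from the median tuning curve, NC from trial-to-trial residuals, both averaged over bootstrap resamples of trials. The data layout, the toy cosine tuning, and all parameters are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def sc_nc(responses, n_boot=200):
    """Estimate signal and noise correlation for one neuron pair.

    responses: array of shape (2, n_stimuli, n_trials) -- assumed layout.
    SC: correlation of the median (over trials) tuning curves.
    NC: correlation of trial-to-trial residuals around those medians.
    """
    n_trials = responses.shape[2]
    sc, nc = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n_trials, n_trials)      # resample trials
        boot = responses[:, :, idx]
        tuning = np.median(boot, axis=2)               # (2, n_stimuli)
        sc.append(np.corrcoef(tuning)[0, 1])
        resid = boot - tuning[:, :, None]              # remove the signal part
        nc.append(np.corrcoef(resid.reshape(2, -1))[0, 1])
    return np.mean(sc), np.mean(nc)

# Toy data: two neurons with similar cosine tuning plus independent noise,
# so SC should be high while NC stays near zero
stims = np.linspace(0, 2 * np.pi, 8)
tuning = np.stack([np.cos(stims), np.cos(stims - 0.3)])
data = tuning[:, :, None] + 0.3 * rng.standard_normal((2, 8, 30))
sc, nc = sc_nc(data)
```

Because the toy noise is independent across the two neurons, this example also reproduces the "SC and NC uncorrelated at a single time point" situation rather than building it in by hand.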
As promised: a detailed figure-by-figure thread on our @pnas.org paper:
doi.org/10.1073/pnas...
We use signal correlations and noise correlations in chronic imaging data to show that representational drift is shaped by a balance between Hebbian and stochastic changes.
Let's dive in 👇
🧠🧪 1/9
Yes. We figured that quite a bit of this relation in the literature might stem from spurious correlations.
Sorry for not referencing your work. We might have used your method if we had been aware of it. We discussed the SC/NC bias a lot and struggled quite a bit to find a solution.
Yes. Sorry, we had to shorten significantly in the end, so the detailed methods are only in the supplement.
04.02.2026 19:09
Thanks a lot. This is very interesting.
We did not split the data into odd and even trials to compute signal correlations, but we subsampled random trials and averaged. So, we followed a similar approach. And in the end we see no correlation at all between SC and NC (within one imaging session).
Huge shoutout to all co-authors ❤️
And especially to co-first author Thomas - it was amazing to work with you on this project!
I will have a figure by figure thread on Sunday or early next week. If anyone has questions let me know so I can answer them then.
🧪🧠
5/5
But Hebbian learning alone isn't enough. In a computational model, we find that to reproduce the observed drift, we also need a stochastic process, either in the inputs or in the network itself.
Representational drift emerges from a balance between stochastic changes and Hebbian learning.
🧪🧠
4/5
We find: during representational drift, SC at one time point predicts NC at a later time point, exactly what you would expect from Hebbian learning.
Representational drift is not just passive instability.
It reflects ongoing Hebbian plasticity continuously reshaping effective connectivity.
🧪🧠
3/5
We analyze population recordings and use
β’ signal correlations (SC) as a proxy for co-active neurons and
β’ noise correlations (NC) as a proxy for effective connectivity
to track how activity and connectivity co-evolve over time during representational drift.
🧪🧠
2/5
New @pnas.org paper out!
"Representational drift reflects ongoing balancing of stochastic changes by Hebbian learning"
doi.org/10.1073/pnas...
What drives representational drift in neural populations? Here's the short version. 👇
🧪🧠
1/5
I wrote this song on Saturday, recorded it yesterday and released it to you today in response to the state terror being visited on the city of Minneapolis. Itβs dedicated to the people of Minneapolis, our innocent immigrant neighbors and in memory of Alex Pretti and Renee Good.
Stay free
At the very least, it is completely superfluous and can just be omitted.
11.11.2025 08:39
How, then, would the same stimulus within one session reliably result in the same neuronal response?
14.10.2025 16:37
Did you watch the video? Or read the article?
I have the feeling you're talking about something completely different. We are talking about "representational drift", not movement detection. How would movement detection account for different responses to the same stimulus on different days?
Direct link to the video 🎥
youtu.be/z63fmYSBcB0
And the excellent article the outreach team wrote for the CRM homepage:
www.crm.cat/why-your-bra...
@crmatematica.bsky.social
🧠🧪
Loved working with our amazing outreach team on this short video about representational drift! @crmatematica.bsky.social
🧠🧪
In it, I explain the points we make in our recent review in CONEUR:
doi.org/10.1016/j.co...
I'll address this question in a minimal model, which I'll present at the Bernstein Conference (29 Sep – 02 Oct) in Frankfurt.
If you're interested, come to our workshop or visit my poster. Looking forward!
@bernsteinneuro.bsky.social
@crmatematica.bsky.social
🧠🧪
My talk at the WWTNS is now online!
In it, I explore how both random processes and Hebbian learning shape representational drift:
www.youtube.com/watch?v=WH4P...
In the end I raise the question:
How can neuronal activities change while representational similarity is preserved?
🧠🧪
This is the first paper to come out from my postdoc at @crmatematica.bsky.social.
Still in collaboration with my old friends Simon Rumpel and Matthias Kaschube, though. 🧠🤝
5/5
This process can be likened to "herding cats": the stochastic changes (cats) are kept in check by statistical learning (shepherd & dogs). We even added an illustration containing 🐱 & 🐶. 🧠🧪
4/5