@kristorpjensen.bsky.social
Computational neuroscientist || Postdoc with Tim Behrens || Sainsbury Wellcome Centre @ UCL
PFCinema strongly divided opinions among coauthors, but glad you appreciate it!
25.09.2025 09:01

Thanks Hannah! Look forward to hearing what you think, and I hope everything is going well in Germany!
25.09.2025 09:00

Thanks!! The whole problem of learning is something we haven't tackled yet but are very interested in! We also think that animals probably learn abstractions even in physical space, because it simplifies the planning problem by reducing the effective search space!
25.09.2025 08:58

Finally a big thanks to all of our co-authors Peter Doohan, @mathiassablemeyer.bsky.social, @sandra-neuro.bsky.social, @alonbaram.bsky.social, and Thomas Akam + everyone else who contributed through discussions, ideas, and feedback!
24.09.2025 09:52

This has been a super fun project, and I'm very excited for the coming years where we will test some of the ideas experimentally together with our many excellent colleagues at the @sainsburywellcome.bsky.social and Oxford!
8/8
We think PFC structures adaptive behaviour using these same principles. If true, it could provide a path towards a unified mechanistic understanding of cortical computations from the sensory periphery to high-level cognition!
7/8
What is most exciting to us is that the STA solves these tasks using attractor dynamics that resemble how visual cortex infers 'missing edges' from partial inputs, how language cortex infers meaning even if we miss a word or two, and how navigation circuits infer orientation and location.
6/8
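To make the pattern-completion analogy above concrete, here is a minimal sketch of attractor dynamics filling in missing information. This is a classic Hopfield network on synthetic patterns, not the spacetime attractor from the paper; it only illustrates the generic principle that recurrent dynamics can recover a stored pattern from a partial cue.

```python
# Toy illustration (NOT the STA model from the paper): a classic Hopfield
# network completes a partially observed pattern via attractor dynamics.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 3

# Store a few random +1/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

# Present a corrupted cue: half of the first pattern's units are zeroed out.
state = patterns[0].copy()
state[: n_units // 2] = 0.0

# Run the dynamics to a fixed point; the network 'infers' the missing half.
for _ in range(20):
    state = np.where(W @ state >= 0, 1.0, -1.0)

print("fraction of units recovered:", np.mean(state == patterns[0]))
```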
RNNs trained to solve such 'PFC-like' tasks learn a solution that exactly mirrors the spacetime attractor in representation, connectivity, and dynamics. They also reveal an elegant mechanism for rapid adaptation of a 'world model' to changing environments, without the need for plasticity!
5/8
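For readers who want to see what 'training an RNN on a task' looks like in practice, below is a generic sketch of the standard setup on a toy delayed-recall task (report the stimulus shown a few steps ago). It assumes PyTorch and is not the task battery, architecture, or analysis code from the paper.

```python
# Generic RNN-training sketch (PyTorch assumed), NOT the paper's code.
# Task: at each time step, report the stimulus presented `delay` steps ago.
import torch
import torch.nn as nn

n_stim, hidden, delay, T = 5, 64, 3, 10
rnn = nn.RNN(n_stim, hidden, batch_first=True)
readout = nn.Linear(hidden, n_stim)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

for step in range(2000):
    idx = torch.randint(n_stim, (32, T))            # random stimulus sequences
    x = nn.functional.one_hot(idx, n_stim).float()
    h, _ = rnn(x)
    logits = readout(h[:, delay:])                  # predict from time `delay` onwards
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, n_stim), idx[:, :-delay].reshape(-1)
    )
    opt.zero_grad(); loss.backward(); opt.step()
```

Comparing the trained network's representations and recurrent connectivity to a hand-constructed attractor network is then a separate analysis step; the details of those comparisons are in the paper.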
It turns out the resulting 'spacetime attractor' (STA) network is particularly good at tasks where the environment changes on a fast timescale, and these are exactly the types of behaviour that we need PFC for!
4/8
We show that these representations can do much more than that. If you connect the different neural populations the right way, the resulting attractor network can infer the future! This allows the network to solve complex problems like planning using representations that we know exist in PFC.
3/8
It is increasingly clear from recent work in mice and monkeys that prefrontal cortex solves sequence memory tasks by using different populations of neurons to represent different elements of the sequence.
2/8
I'm super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!
www.biorxiv.org/content/10.1...
Great paper by @jonathannicholas.bsky.social and @marcelomattar.bsky.social!
A related discussion we had in our lab recently is whether there exists convincing evidence that mice use episodic memory - refs are welcome if anyone knows relevant work!
Amazing work by Mehran, @sonjahofer.bsky.social, and colleagues, characterizing neural mechanisms underlying explore/exploit behaviours!
06.03.2025 21:47

New paper from the lab!
Mathias Sablé-Meyer used behavior, fMRI and MEG to study the mental representation of geometric shapes (quadrilaterals ranging in regularity from squares and rectangles to random figures).
www.biorxiv.org/content/10.1...
For my first Bluesky post, I'm very excited to share a thread on our recent work with Mitra Javadzadeh, investigating how connections between cortical areas shape computations in the neocortex! [1/7] www.biorxiv.org/content/10.1...
31.01.2025 02:57

663 days since the senseless tragedy that took An, we present a manuscript that reports some of the discoveries that she left us.
www.biorxiv.org/content/10.1...
Glad to hear it was useful!
28.12.2024 01:08

(also just for those who see this post but don't find the correct other thread that resolves the original question: Xie et al. do use the smallest angle between subspaces, and the confusion arises from different definitions of 'first' principal angle)
24.12.2024 09:26

If you're getting into the weeds anyways, it's worth noting that (i) just doing SVDs on noisy data actually yields biased estimates, and (ii) it turns out the subspaces are slightly correlated, and this is expected from theory.
Ref: excellent work by Will Dorrell & co. (arxiv.org/abs/2410.06232)
From a brief look at their code, they call base MATLAB svd, which returns the singular values in descending order, corresponding to the angles in ascending order - so I think everything is correct!
24.12.2024 09:17

www.science.org/doi/10.1126/...
This work on sequence working memory from Liping Wang's lab is super cool!
TLDR: when macaques remember a sequence, ~orthogonal neural subspaces in DLPFC store the identity of the item at each index of the sequence
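For anyone who wants to play with this, below is a minimal sketch of how principal angles between two neural subspaces can be computed, on synthetic data (this is not the analysis code from Xie et al.). It also shows the ordering convention discussed in the replies above: the singular values of the cross-projection come out in descending order, so the corresponding angles are ascending, and the 'first' (smallest) principal angle comes from the largest singular value.

```python
# Principal angles between two subspaces via SVD (synthetic data, for
# illustration only; NOT the analysis code from Xie et al.).
import numpy as np

rng = np.random.default_rng(1)
n_neurons, dim = 50, 3

# Two subspaces, spanned by the columns of A and B (here: random).
A = rng.standard_normal((n_neurons, dim))
B = rng.standard_normal((n_neurons, dim))

# Orthonormalize each basis, then SVD the cross-projection: the singular
# values are the cosines of the principal angles, so descending singular
# values correspond to ascending angles.
Q1, _ = np.linalg.qr(A)
Q2, _ = np.linalg.qr(B)
cosines = np.linalg.svd(Q1.T @ Q2, compute_uv=False)

angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
print("principal angles (deg):", np.round(angles, 1))
# Exactly orthogonal subspaces would give 90 degrees for every angle; random
# subspaces of a high-dimensional space are close to, but not exactly, orthogonal.
```

As noted in the replies above, estimating these angles from noisy data is biased, so the raw numbers should be interpreted with care.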
I agree that the focus on these results has diminished in recent years. Possibly a result of the rise of ML, where the focus is more on algorithms and performance than on qualitative behaviours? A more positive take is that we do incorporate this understanding by building on prior modelling work.
23.12.2024 17:45

Thanks!!
I do think quite a lot of the work from behavioural psychology has carried over to cognitive/neuroscience - people still talk about Pavlovian & instrumental conditioning, effects like blocking, etc. These effects also heavily inspired early computational modelling (e.g. Rescorla-Wagner).
For those who are interested, I also wrote a Colab notebook that implements some of these RL algorithms and reproduces all the figures from the review: colab.research.google.com/drive/1ZC4lR...
21.12.2024 17:59

The NBDT reviewers and editor were great throughout the process and really helped improve the manuscript!
I am also very happy to publish this (i) freely available to everyone, and (ii) without any publishing companies profiting from the work of myself or the reviewers.
I wrote an introduction to RL for neuroscience last year that was just published in NBDT: tinyurl.com/5f58zdy3
This review aims to provide some intuition for and derivations of RL methods commonly used in systems neuroscience, ranging from TD learning through the SR to deep and distributional RL!
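As a taste of what is in the notebook, here is a minimal tabular TD(0) example on a short chain of states (a sketch in the spirit of the review, not code copied from the Colab).

```python
# Tabular TD(0) on a 5-state chain: states 0..4, deterministic steps to the
# right, reward 1 on reaching the terminal state. A sketch, not the Colab code.
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s').
        bootstrap = 0.0 if s_next == n_states - 1 else V[s_next]
        V[s] += alpha * (r + gamma * bootstrap - V[s])
        s = s_next

print("learned V:", np.round(V, 2))
print("true V:   ", np.round([gamma ** (n_states - 2 - i) for i in range(n_states - 1)] + [0.0], 2))
```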