
Xiao-Xiong Lin

@xxlin.bsky.social

Computational and systems neuroscience. Deep RL hippocampus navigation <- data analysis / neural network modelling of PFC working memory and flexible cognition. xiaoxionglin.com https://www.bcf.uni-freiburg.de/about/people/lin github.com/xiaoxionglin/dSCA

48 Followers  |  109 Following  |  39 Posts  |  Joined: 03.11.2023

Latest posts by xxlin.bsky.social on Bluesky

I guess in comparison, our work flips the narrative around: sequential connectivity (no matter how or why that's achieved, e.g. optimizing prediction, or prewired) simply gives rise to intrinsic sequences, and the place cells, even in the online mode, are a result of it.

29.01.2026 17:17 — 👍 1    🔁 0    💬 0    📌 0

Indeed. I listened to your talk at the Bernstein Conference and then got more details from Adrien Peyrache's VVTNS talk a few weeks ago. Great work and a really engaging story!

29.01.2026 16:28 — 👍 0    🔁 0    💬 1    📌 0

Completely agree 🫡 We've got a freshly baked contribution along this path:
bsky.app/profile/xxli...

28.01.2026 02:36 — 👍 1    🔁 0    💬 1    📌 0

Would love to hear thoughts from both ML and neuroscience folks on using RL as a functional testbed for brain circuit models 🧠🤖🎰

28.01.2026 02:27 — 👍 0    🔁 0    💬 0    📌 0

Takeaway:

Modern deep reinforcement learning provides a principled testbed for hippocampal circuit hypotheses, supporting a view in which intrinsic CA3 sequence dynamics scaffold spatial representations from egocentric experience rather than merely reflecting replay or planning.

8/8

28.01.2026 02:22 — 👍 1    🔁 0    💬 1    📌 0

Unlike many prior approaches that explicitly encourage spatial structure (e.g. via mapping or auxiliary losses), our model includes no spatial objectives.

Nevertheless, structured spatial tuning emerges during navigation.

7/n

28.01.2026 02:22 — 👍 0    🔁 0    💬 1    📌 0
place field map: white boxes with crosses are obstacles.

spatial kernel: measured as the correlation of population activity between pairs of spatial locations, averaged over pairs with the same displacement, distance, or orientation.

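A minimal sketch of how such a distance-binned spatial kernel could be computed (hypothetical names; assumes population activity has already been averaged per spatial location):

```python
import numpy as np

def spatial_kernel_by_distance(act, coords, n_bins=10):
    """Correlate population activity between all pairs of spatial
    locations, then average the correlations within distance bins.

    act    : (n_locations, n_neurons) mean activity per location
    coords : (n_locations, 2) x/y coordinates of each location
    """
    corr = np.corrcoef(act)  # population-vector correlation, (n_loc, n_loc)
    # Euclidean distance between every pair of locations
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

    iu = np.triu_indices_from(corr, k=1)  # count each pair once
    edges = np.linspace(0, dist[iu].max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(dist[iu], edges) - 1, 0, n_bins - 1)
    # mean pairwise correlation per distance bin = the spatial kernel
    kernel = np.array([corr[iu][bin_idx == b].mean() for b in range(n_bins)])
    return edges, kernel
```

The displacement- and orientation-averaged variants would bin by the displacement vector or its angle instead of the scalar distance.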

The sequence-based agent develops place-cell–like spatial tuning and distance-dependent representational similarity of spatial locations.

By contrast, LSTM agents trained on the same tasks do not form comparably structured spatial representations.

6/n

28.01.2026 02:21 — 👍 0    🔁 0    💬 1    📌 0
Sparse input: as in the DG->CA3 model. Dense input: batch normalization and high thresholding removed.

CA3: our CA3 model with L=64 and R=8. 

RandRNN: randomly initialized fixed RNN of the same state size. 

SSM_LegS: fixed SSM HiPPO-LegS from Gu et al. (2020) with the same state size. 

LSTM: trainable LSTM with a matching number of parameters.

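For intuition, a hypothetical sketch (not the paper's code) of a hard-coded CA3 core in this spirit: one fixed sequence per DG input feature, driven only by when that feature fires:

```python
import numpy as np

class CA3Sequences:
    """Hypothetical hard-coded CA3 core: one shift register of length
    l = L + R - 1 per DG input feature. A transient DG input enters at
    the head of its register and keeps propagating down the sequence,
    giving a long-horizon trace without any trained recurrence."""

    def __init__(self, n_features=16, L=64, R=8):
        self.state = np.zeros((n_features, L + R - 1))

    def step(self, dg_input):
        """dg_input: (n_features,) sparse DG activity at this timestep."""
        self.state = np.roll(self.state, 1, axis=1)  # activity advances one neuron
        self.state[:, 0] = dg_input                  # new input enters the head
        return self.state.ravel()                    # flattened CA3 state -> decoder
```

With L = 64 and R = 8, each sequence spans ℓ = 71 neurons; how R neurons stay co-active per cycle is a detail this sketch glosses over.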

With sparse sensory encoding, agents using intrinsic sequences learn faster and more stably than standard recurrent baselines, including LSTM agents.

This advantage largely disappears when input is dense.

5/n

28.01.2026 02:18 — 👍 1    🔁 0    💬 1    📌 0
Virtual environments (19 × 19 tiles) were constructed using DeepMind Lab, with walls randomly placed on 15% of the tiles. Wall layouts are kept fixed across repeated trials, with an invisible reward near the bottom right. In each episode, the agent is initially placed at a random location at least 5 tiles away from the goal.

The agent receives first-person visual input that is processed by a visual encoder (shallow ResNet with 3 convolutional blocks, matching the SOTA in the DeepMind Lab environment (Espeholt et al., 2018); pretrained and fixed in our experiments). Its output is linearly mapped to F = 16 features (FC: fully connected layer) and then sparsified using batch normalization and high thresholding (τ = 2.43), such that the fraction of active units (∼2.5%) matches the sparse activity of DG granule cells that project to CA3. CA3 is modeled as sequences of neurons, one sequence per DG input feature. The activity of all CA3 neurons is then flattened and linearly mapped to the decoder multilayer perceptron. The visual encoder is pretrained and fixed. CA3 is hard-coded to isolate the effect of long-range integration. The DG and decoder modules are trained.

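A minimal sketch of the DG-style sparsification step described above (assuming the thresholding acts as a shifted ReLU; the actual implementation may differ, e.g. a hard gate):

```python
import torch
import torch.nn as nn

class SparseDG(nn.Module):
    """Map encoder output to F features, batch-normalize, and keep only
    activations well above the mean. With tau = 2.43 only a small
    fraction of units (~2.5% in the post) stays active, matching the
    sparseness of DG granule cells projecting to CA3."""

    def __init__(self, n_features=16, tau=2.43):
        super().__init__()
        self.fc = nn.LazyLinear(n_features)  # FC layer on the visual encoder output
        self.bn = nn.BatchNorm1d(n_features)
        self.tau = tau

    def forward(self, x):                    # x: (batch, encoder_dim)
        z = self.bn(self.fc(x))              # ~zero mean, unit variance per feature
        return torch.relu(z - self.tau)      # zero unless > tau standard deviations
```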

We embed a minimal, interpretable DG–CA3-like sequence generator core in an end-to-end actor–critic agent operating in a realistic navigation environment, without auxiliary objectives.
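For readers newer to the RL side, this is the kind of objective an advantage actor-critic agent optimizes (a generic sketch; Espeholt et al. (2018) use the IMPALA variant with V-trace corrections, which this omits):

```python
import torch

def actor_critic_loss(logits, values, actions, returns, beta=0.01):
    """Generic advantage actor-critic objective: policy gradient weighted
    by the advantage, value regression toward observed returns, and an
    entropy bonus that keeps exploration alive."""
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values                           # better than expected?
    policy_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    value_loss = 0.5 * advantage.pow(2).mean()             # critic regression
    return policy_loss + value_loss - beta * dist.entropy().mean()
```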

4/n

28.01.2026 02:14 — 👍 0    🔁 0    💬 1    📌 0
Illustration of theta sequences observed in rodent hippocampus. In each theta cycle, R = 3 neurons are activated, and the activation propagates over L = 4 theta cycles in a sequence of ℓ = L + R − 1 = 6 neurons.

Theta sequences are thought to be driven by sequential inputs, despite the recurrent connections in the hippocampus.

The recurrent connections could support generating long-horizon sequential activity without sequential external inputs.

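A toy sketch just to make the ℓ = L + R − 1 counting concrete (hypothetical code, not from the paper):

```python
import numpy as np

def theta_sequence(L=4, R=3):
    """One intrinsic sequence: in theta cycle t, neurons t .. t+R-1 are
    active, so an R-wide packet sweeps across l = L + R - 1 neurons
    over L theta cycles."""
    pattern = np.zeros((L, L + R - 1), dtype=int)  # rows: cycles, cols: neurons
    for t in range(L):
        pattern[t, t:t + R] = 1
    return pattern

print(theta_sequence())
# [[1 1 1 0 0 0]
#  [0 1 1 1 0 0]
#  [0 0 1 1 1 0]
#  [0 0 0 1 1 1]]
```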

At a mechanistic level, hippocampal circuits generate intrinsic activity sequences even when sequential sensory input is sparse or absent.

This suggests intrinsic sequence dynamics as a plausible substrate for constructing spatial representations from egocentric experience.

3/n

28.01.2026 02:04 — 👍 0    🔁 0    💬 1    📌 0

In real-world navigation, the sensory stream is ambiguous and policy-dependent, while spatially informative "landmarks" are sparse.

Many biological models emphasize interpretability but lack task-level realism, while engineering approaches achieve competence with limited mechanistic insight.

2/n

28.01.2026 02:01 — 👍 0    🔁 0    💬 1    📌 0
Emergence of Spatial Representation in an Actor-Critic Agent with... Sequential activation of place-tuned neurons in an animal during navigation is typically interpreted as reflecting the sequence of input from adjacent positions along the trajectory. More recent...

🎉 Accepted at ICLR 2026! 🎉

We show that place-cell–like spatial representations can emerge in a deep RL agent with structured recurrent dynamics (like hippocampus 🌊🐴), without explicit spatial supervision.

PDF: openreview.net/forum?id=li1...

28.01.2026 02:00 — 👍 1    🔁 0    💬 1    📌 1
Davos 2026: Special address by Mark Carney, PM of Canada Canadian PM Mark Carney stressed the end of the rules-based international order and urged middle powers to act together to counter the great power rivalry.

Would be proud to be a Canadian in this interesting time.

www.weforum.org/stories/2026...

21.01.2026 14:50 — 👍 0    🔁 0    💬 0    📌 0
How Intrinsic Motivation Underlies Embodied Open-Ended Behavior Although most theories posit that natural behavior can be explained as maximizing some form of extrinsic reward, often called utility, some behaviors appear to be reward independent. For instance, spo...

New preprint. A review on Intrinsic Motivation Theories

arxiv.org/abs/2601.10276

In collaboration with an amazing team

16.01.2026 07:29 — 👍 14    🔁 5    💬 0    📌 0

The same thought always comes to me when I hear from people who wanna ban AI in order to protect their jobs.

10.12.2025 20:59 — 👍 0    🔁 0    💬 0    📌 0
Neural Subspaces Encode Sequential Working Memory, but Neural Sequences Do Not The neural mechanisms of multiple-item working memory are not well understood. In the current study, we address two competing hypotheses about the neural basis of sequential working memory: neural sub...

New preprint! 🎉 We investigate how the brain maintains multiple items in working memory, testing two competing hypotheses for sequential memory: neural subspaces vs. neural sequences.
www.biorxiv.org/content/10.1...

10.12.2025 08:06 — 👍 23    🔁 11    💬 1    📌 1

The new elementary school math curriculum in China just dropped! It is much more logical now!

08.12.2025 17:19 — 👍 0    🔁 0    💬 0    📌 0

So cool...

05.12.2025 23:18 — 👍 0    🔁 0    💬 0    📌 0
How I contributed to rejecting one of my favorite papers of all time I believe we should talk about the mistakes we make.

How I contributed to rejecting one of my favorite papers of all time. Yes, I teach it to students daily, and refer to it in lots of papers. Sorry. open.substack.com/pub/kording/...

02.12.2025 01:27 — 👍 118    🔁 28    💬 1    📌 10

The best scientific papers are provocations, not results you can rely on. Discuss!

What I mean is that they should try to force progress by making an outrageous statement that the established field wants to be wrong, but do it so well that proving it wrong is a real challenge.

30.11.2025 00:59 — 👍 16    🔁 2    💬 5    📌 0
Geometric properties of musical scales constitute a representational primitive in melodic processing Music; Cognitive Psychology; Cognitive Science

I really like the cogsci thinking and music in this paper. Happy that it's finally out! Thanks @omriraccah.bsky.social and Michael Seltenreich for leading this project.

Geometric properties of musical scales constitute a representational primitive in melodic processing

www.cell.com/iscience/ful...

14.11.2025 13:34 — 👍 24    🔁 8    💬 0    📌 0
Functional organization of the primate prefrontal cortex reflects individual mnemonic strategies Modular organization, the division of the cerebral cortex into functionally distinct subregions, is well established in the primate sensorimotor cortex, but debated in the cognitive association cortex...

Major update to our preprint. Led by @xuanyuwang.bsky.social, we show that the mesoscopic functional organization of the primate PFC reflects individual cognitive strategies and mnemonic abilities. 🧠 doi.org/10.1101/2024...

07.06.2025 08:15 — 👍 14    🔁 6    💬 0    📌 0

congrats!

23.09.2025 13:53 — 👍 1    🔁 0    💬 0    📌 0

Sad to miss #CCN2025. It will be the 1st conference where a PhD working w/ me will speak 😭

go see Lubna's talk (Friday) about distributed neural correlates of flexible decision making in 🐒,

work done in collaboration w/ @scottbrincat.bsky.social @siegellab.bsky.social & @earlkmiller.bsky.social

10.08.2025 15:56 — 👍 59    🔁 18    💬 1    📌 0

Dale Schuurmans going in on RL at RLC

06.08.2025 23:01 — 👍 22    🔁 2    💬 0    📌 1

🔥🔥🔥

01.07.2025 20:31 — 👍 1    🔁 0    💬 0    📌 0

Enjoying some contemporary art after #neuromonster

30.05.2025 18:31 — 👍 0    🔁 0    💬 0    📌 0
graph with scientist names on the x axis and colored indicators for NIH, NSF, and Other funding. Also indicators for active pre-1960 and Outside US


When we communicate science, we often don't say where the resources for it come from. This is clearly a mistake: people consuming science should know who is funding it--and if that funding is being taken away. So, I decided to document the funding sources of every scientist mentioned in my book.

28.05.2025 22:46 — 👍 107    🔁 35    💬 2    📌 0

Take home message: every researcher thinks state transitions are computed in the brain area that they study

28.05.2025 08:58 — 👍 1    🔁 0    💬 0    📌 0

Bluesky, bluesky, who's at #neuromonster in Split?

26.05.2025 14:34 — 👍 0    🔁 0    💬 0    📌 0
