I guess in comparison, our work flips the narrative around: sequential connectivity (no matter how or why it arises, e.g. optimizing prediction, or prewired) simply gives rise to intrinsic sequences. And the place cells, even in the online mode, are a result of it.
29.01.2026 17:17 • 👍 1 🔁 0 💬 0 📌 0
Indeed. I listened to your talk at Bernstein Conf and got more details again from Adrien Peyrache's VVTNS talk a few weeks ago. Great work and really engaging story!
29.01.2026 16:28 • 👍 0 🔁 0 💬 1 📌 0
Completely agree 🫡 We've got a freshly baked contribution along this path:
bsky.app/profile/xxli...
28.01.2026 02:36 • 👍 1 🔁 0 💬 1 📌 0
Would love to hear thoughts from both ML and neuroscience folks on using RL as a functional testbed for brain circuit models 🧠🤖💰
28.01.2026 02:27 • 👍 0 🔁 0 💬 0 📌 0
Takeaway:
Modern deep reinforcement learning provides a principled testbed for hippocampal circuit hypotheses, supporting a view in which intrinsic CA3 sequence dynamics scaffold spatial representations from egocentric experience rather than merely reflecting replay or planning.
8/8
28.01.2026 02:22 • 👍 1 🔁 0 💬 1 📌 0
Unlike many prior approaches that explicitly encourage spatial structure (e.g. via mapping or auxiliary losses), our model includes no spatial objectives.
Nevertheless, structured spatial tuning emerges during navigation.
7/n
28.01.2026 02:22 • 👍 0 🔁 0 💬 1 📌 0
Place-field map: white boxes with crosses are obstacles.
Spatial kernel: measured as the correlation of population activity between pairs of spatial locations, averaged over pairs with the same displacement, distance, or orientation.
The sequence-based agent develops place-cell-like spatial tuning and distance-dependent representational similarity of spatial locations.
By contrast, LSTM agents trained on the same tasks do not form comparably structured spatial representations.
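A minimal sketch (not the paper's code) of how such a distance-binned spatial kernel can be computed; `rate_maps`, the grid layout, and the binning are illustrative assumptions:

```python
import numpy as np

def spatial_kernel(rate_maps, max_dist=10):
    """Distance-binned representational similarity, as described above.

    rate_maps: (n_cells, H, W) array of per-cell activity maps.
    Returns the mean population-vector correlation between pairs of
    locations, binned by the distance separating them.
    """
    n_cells, H, W = rate_maps.shape
    pop = rate_maps.reshape(n_cells, -1).astype(float)  # one population vector per location
    pop -= pop.mean(axis=0, keepdims=True)              # center each vector across cells
    norm = np.linalg.norm(pop, axis=0)
    norm[norm == 0] = 1.0                               # guard: silent locations (e.g. obstacles)
    corr = (pop.T @ pop) / np.outer(norm, norm)         # location-by-location similarity

    ys, xs = np.divmod(np.arange(H * W), W)             # grid coordinates of each location
    dist = np.hypot(ys[:, None] - ys[None, :], xs[:, None] - xs[None, :])

    return np.array([corr[(dist >= d) & (dist < d + 1)].mean()
                     for d in range(max_dist + 1)])
```

Averaging by displacement or orientation instead of distance just changes the binning variable.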
6/n
28.01.2026 02:21 • 👍 0 🔁 0 💬 1 📌 0
Sparse input: as in the DG->CA3 model. Dense input: the same model with batch normalization and high thresholding removed.
CA3: our CA3 model with L = 64 and R = 8.
RandRNN: a randomly initialized, fixed RNN with the same state size.
SSM_LegS: the fixed HiPPO-LegS SSM from Gu et al. (2020) with the same state size.
LSTM: a trainable LSTM with a matching number of parameters.
With sparse sensory encoding, agents using intrinsic sequences learn faster and more stably than standard recurrent baselines, including LSTM agents.
This advantage largely disappears when input is dense.
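For concreteness, here is what the sparse/dense manipulation amounts to under my reading of the caption above (per-feature normalization followed by a hard threshold); the threshold is from the thread, the Gaussian stand-in data is not:

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(4096, 16))       # stand-in for the F = 16 encoder features

# Sparse variant: batch-normalize each feature, then keep only
# strongly driven units (hard threshold), zeroing the rest.
z = (feats - feats.mean(axis=0)) / feats.std(axis=0)
tau = 2.43                                # threshold quoted in the thread
dg_sparse = np.where(z > tau, z, 0.0)

# Dense variant: skip normalization and thresholding entirely.
dg_dense = feats

print(f"active fraction (sparse): {(dg_sparse > 0).mean():.4f}")
# The thread reports ~2.5% active on the real encoder features; on this
# Gaussian stand-in the same threshold yields a smaller fraction (~0.75%).
```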
5/n
28.01.2026 02:18 • 👍 1 🔁 0 💬 1 📌 0
The virtual environment (19 × 19 tiles) was constructed in DeepMind Lab, with walls randomly placed on 15% of the tiles. Wall layouts are kept fixed across repeated trials, with an invisible reward near the bottom right. In each episode, the agent starts at a random location at least 5 tiles away from the goal.
The agent receives first-person visual input that is processed by a visual encoder (a shallow ResNet with 3 convolutional blocks, matching the SOTA in the DeepMind Lab environment (Espeholt et al., 2018); pretrained and fixed in our experiments). The encoder outputs are linearly mapped to F = 16 features (FC: fully connected layer) and then sparsified using batch normalization and high thresholding (τ = 2.43), such that the fraction of active units (∼2.5%) matches the sparse activity of the DG granule cells that project to CA3. CA3 is modeled as one sequence of neurons per DG input feature. The activity of all CA3 neurons is then flattened and linearly mapped to the decoder multilayer perceptron. The visual encoder is pretrained and fixed, and CA3 is hard-coded, to isolate the effect of long-range integration; the DG and decoder modules are trained.
We embed a minimal, interpretable DG→CA3-like sequence-generator core in an end-to-end actor–critic agent operating in a realistic navigation environment, without auxiliary objectives.
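The thread doesn't spell out the CA3 core's update rule, so the following is only one plausible reading: each of the F DG features feeds its own hard-coded chain of neurons, and activity shifts one position per step. `SequenceCore` and all names here are mine, not the paper's:

```python
import numpy as np

class SequenceCore:
    """Toy hard-coded DG->CA3 core: one chain of `length` neurons per DG
    feature; injected activity propagates one position per time step."""

    def __init__(self, n_features=16, length=64):
        self.state = np.zeros((n_features, length))

    def step(self, dg_input):
        # Fixed recurrence: shift every chain by one position; the wrapped
        # column is immediately overwritten by the new input.
        self.state = np.roll(self.state, 1, axis=1)
        self.state[:, 0] = dg_input          # inject sparse DG features at the head
        return self.state.reshape(-1)        # flattened CA3 activity for the decoder

core = SequenceCore()
ca3 = core.step(np.zeros(16))                # one tick of intrinsic dynamics
```

Because the core is fixed, all learning happens in the DG mapping upstream and the decoder downstream, matching the caption's point about isolating long-range integration.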
4/n
28.01.2026 02:14 • 👍 0 🔁 0 💬 1 📌 0
Illustration of theta sequences observed in rodent hippocampus. In each theta cycle, R = 3 neurons are activated, and the activation propagates over L = 4 theta cycles in a sequence of ℓ = L + R − 1 = 6 neurons.
Theta sequences are thought to be driven by sequential inputs despite the recurrent connections in the hippocampus. Those recurrent connections could, however, support generating long-horizon sequential activity without sequential external inputs.
At a mechanistic level, hippocampal circuits generate intrinsic activity sequences even when sequential sensory input is sparse or absent.
This suggests intrinsic sequence dynamics as a plausible substrate for constructing spatial representations from egocentric experience.
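The ℓ = L + R − 1 count is easy to sanity-check: a window of R active neurons advances one neuron per theta cycle, so L cycles touch L + R − 1 distinct neurons. A tiny sketch, with the layout assumed from the figure description:

```python
L, R = 4, 3                              # theta cycles, neurons active per cycle
for t in range(L):
    print(f"cycle {t}: neurons {list(range(t, t + R))}")
# cycle 0: [0, 1, 2] ... cycle 3: [3, 4, 5]
print("distinct neurons:", L + R - 1)    # ell = L + R - 1 = 6
```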
3/n
28.01.2026 02:04 • 👍 0 🔁 0 💬 1 📌 0
In real-world navigation, the sensory stream is ambiguous and policy-dependent, while spatially informative "landmarks" are sparse.
Many biological models emphasize interpretability but lack task-level realism, while engineering approaches achieve competence with limited mechanistic insight.
2/n
28.01.2026 02:01 • 👍 0 🔁 0 💬 1 📌 0
The same thought always comes to me when I hear from people who wanna ban AI in order to protect their jobs.
10.12.2025 20:59 • 👍 0 🔁 0 💬 0 📌 0
The new elementary school math curriculum in China just dropped! It is much more logical now!
08.12.2025 17:19 • 👍 0 🔁 0 💬 0 📌 0
So cool...
05.12.2025 23:18 • 👍 0 🔁 0 💬 0 📌 0
How I contributed to rejecting one of my favorite papers of all time
I believe we should talk about the mistakes we make.
How I contributed to rejecting one of my favorite papers of all time. Yes, I teach it to students daily, and refer to it in lots of papers. Sorry. open.substack.com/pub/kording/...
02.12.2025 01:27 • 👍 118 🔁 28 💬 1 📌 10
The best scientific papers are provocations, not results you can rely on. Discuss!
What I mean is that they should try to force progress by making an outrageous statement that the established field wants to be wrong, but do it so well that proving it wrong is a real challenge.
30.11.2025 00:59 • 👍 16 🔁 2 💬 5 📌 0
I really like the cogsci thinking and music in this paper. Happy that it's finally out! Thanks @omriraccah.bsky.social and Michael Seltenreich for leading this project.
Geometric properties of musical scales constitute a representational primitive in melodic processing
www.cell.com/iscience/ful...
14.11.2025 13:34 • 👍 24 🔁 8 💬 0 📌 0
congrats!
23.09.2025 13:53 • 👍 1 🔁 0 💬 0 📌 0
Sad to miss #CCN2025. It will be the 1st conference where a PhD working w/ me will speak 😭
go see Lubna's talk (Friday) about distributed neural correlates of flexible decision making in 🐒,
work done in collaboration w/ @scottbrincat.bsky.social @siegellab.bsky.social & @earlkmiller.bsky.social
10.08.2025 15:56 • 👍 59 🔁 18 💬 1 📌 0
🔥🔥🔥
01.07.2025 20:31 • 👍 1 🔁 0 💬 0 📌 0
Enjoying some contemporary art after #neuromonster
30.05.2025 18:31 • 👍 0 🔁 0 💬 0 📌 0
graph with scientist names on the x axis and colored indicators for NIH, NSF, and Other funding. Also indicators for active pre-1960 and Outside US
When we communicate science, we often don't say where the resources for it come from. This is clearly a mistake: people consuming science should know who is funding it--and if that funding is being taken away. So, I decided to document the funding sources of every scientist mentioned in my book.
28.05.2025 22:46 • 👍 107 🔁 35 💬 2 📌 0
Take-home message: every researcher thinks state transitions are computed in the brain area that they study
28.05.2025 08:58 • 👍 1 🔁 0 💬 0 📌 0
Bluesky, bluesky, who's at #neuromonster in Split?
26.05.2025 14:34 • 👍 0 🔁 0 💬 0 📌 0
Proud dad, Professor of Computational Cognitive Neuroscience, author of The Decoding Toolbox, founder of http://things-initiative.org
our lab 👉 https://hebartlab.com
Space, Sleep, and Spikes | Associate Prof @ McGill, Montreal Neurological Institute | Co-director, The Quebec Sleep Research Network
Agents, memory, representations, robots, vision. Sr Research Scientist at Google DeepMind. Previously at Oxford Robotics Institute. Views my own.
Associate Prof at U Penn. Learning, memory, sleep, neural network modeling...
We're arXiv, the open access repository sharing discoveries and breaking down barriers to cutting edge science for over 30 years.
Bot. I daily tweet progress towards machine learning and computer vision conference deadlines. Maintained by @chriswolfvision.bsky.social
Neuroscientist interested in representations of space & memory. Using tools from experimental & theoretical neuroscience as well as machine learning. @UCL
https://barry-lab.com
Neuro + AI Research Scientist at DeepMind; Affiliate Professor at Columbia Center for Theoretical Neuroscience.
Likes studying learning+memory, hippocampi, and other things brains have and do, too.
she/her.
Neuroscientist, in theory.
Studying sleep and navigation in 🧠s and 💻s.
Wu Tsai Investigator, Assistant Professor of Neuroscience at Yale.
An emergent property of a few billion neurons, their interactions with each other and the world over ~1 century.
CompNeuro Phd-to-be
Dynamical systems, perturbations, development, cortical representations.
EurIPS is a community-organized, NeurIPS-endorsed conference in Copenhagen where you can present papers accepted at @neuripsconf.bsky.social
eurips.cc
neuroscientist in Korea (co-director of IBS-CNIR) interested in how neuroimaging (e.g. fMRI or widefield optical imaging) can facilitate closed-loop causal interventions (e.g. neurofeedback, patterned stimulations). https://tinyurl.com/hakwan
language data science https://moebio.com/
Postdoc @Harvard interested in neuro-AI and neurotheory. Previously @columbia, @ucberkeley, and @apple. 🧠🧪🤖
We report stories of progress
[bridged from https://fixthenews.com/ on the web: https://fed.brid.gy/web/fixthenews.com ]
Assistant Professor at Gatech CSE. Comp neuro + Machine learning.
Associate Professor of Computational Neuroscience at the University of Amsterdam. Interested in large-scale brain models, neural dynamics and cognition.
Neuroscientists @tum.de | Systems, cognitive, computational | Brain-computer interfaces | Translational Neurotechnology 🧠 🇩🇪🇨🇳🇮🇹🇹🇷🇪🇪🇪🇸🇹🇩 www.simonjacob.de