
David G. Clark

@david-g-clark.bsky.social

Theoretical neuroscientist | Grad student @ Columbia | dclark.io

384 Followers  |  435 Following  |  89 Posts  |  Joined: 21.11.2024

Latest posts by david-g-clark.bsky.social on Bluesky

Cool!

31.07.2025 18:13 — 👍 3    🔁 0    💬 0    📌 0
Kempner Research Fellow Andy Keller Wants to Improve How AI Systems Represent a Dynamic World - Kempner Institute
Humans have a powerful ability to recognize patterns in a dynamic, ever-changing world, allowing for problem-solving and other cognitive abilities that are the hallmark of intelligent behavior. Yet de...

#KempnerInstitute research fellow @andykeller.bsky.social and coauthors Yue Song, Max Welling and Nicu Sebe have a new book out that introduces a framework for developing equivariant #AI & #neuroscience models. Read more:

kempnerinstitute.harvard.edu/news/kempner...

#NeuroAI

29.07.2025 17:42 — 👍 7    🔁 2    💬 0    📌 0
Representational drift without synaptic plasticity
Neural computations support stable behavior despite relying on many dynamically changing biological processes. One such process is representational drift (RD), in which neurons' responses change over ...

When neurons change, but behavior doesn't: Excitability changes driving representational drift

New preprint of work with Christian Machens: www.biorxiv.org/content/10.1...

29.07.2025 14:02 — 👍 67    🔁 28    💬 2    📌 0
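A geometric intuition for how responses can drift while behavior stays fixed (a toy of my own, not the preprint's excitability mechanism): if response changes stay within the null space of the downstream readout, the readout never sees them. A minimal numpy sketch, with all dimensions arbitrary:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
D = rng.standard_normal((3, 10))   # readout: 10 neurons -> 3 behavioral variables
r0 = rng.standard_normal(10)       # initial population responses
N = null_space(D)                  # activity directions invisible to the readout

for day in range(5):
    r = r0 + N @ rng.standard_normal(N.shape[1])  # responses change from day to day...
    print(np.allclose(D @ r, D @ r0))             # ...but the readout output does not: True
```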
SciPost Phys. Lect. Notes 84 (2024) - Ambitions for theory in the physics of life

The summer schools at Les Houches are a magnificent tradition. I was honored to lecture there in 2023, and my notes are now published as "Ambitions for theory in the physics of life." #physics #physicsoflife scipost.org/SciPostPhysL...

25.07.2025 21:57 — 👍 22    🔁 6    💬 2    📌 2

Trying to train RNNs in a biologically plausible (local) way? Try our new method using predictive alignment. Paper just out in Nature Communications. Toshitake Asabuki deserves all the credit!
www.nature.com/articles/s41...

23.07.2025 12:10 — 👍 54    🔁 16    💬 1    📌 0
Flow Equivariant Recurrent Neural Networks - Kempner Institute
Sequence transformations, like visual motion, dominate the world around us, but are poorly handled by current models. We introduce the first flow equivariant models that respect these motion symmetrie...

New in the #DeeperLearningBlog: #KempnerInstitute research fellow @andykeller.bsky.social introduces the first flow equivariant neural networks, which reflect motion symmetries, greatly enhancing generalization and sequence modeling.

bit.ly/451fQ48

#AI #NeuroAI

22.07.2025 13:21 — 👍 8    🔁 4    💬 0    📌 0
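For intuition about what "equivariant" buys you (a static toy of my own, not the paper's flow-equivariant construction): an equivariant map commutes with the symmetry transformation, the way circular convolution commutes with shifts. Flow equivariance extends this idea to continuous, time-parameterized transformations such as motion.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)   # 1D "image"
k = rng.standard_normal(5)    # convolution kernel

def conv(x, k):
    # circular convolution via FFT: a shift-equivariant linear map
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, len(x))))

shift = lambda v, s: np.roll(v, s)
# equivariance: transform-then-map equals map-then-transform
print(np.allclose(conv(shift(x, 3), k), shift(conv(x, k), 3)))  # True
```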

Excited!

16.07.2025 19:05 — 👍 30    🔁 1    💬 3    📌 0

🆒

14.07.2025 22:24 — 👍 2    🔁 0    💬 0    📌 0

Independent of how one defines “mechanism,” a reasonable program seems to be:
1. Figure out whether (approximate) attractor dynamics occur.
2. If so, figure out what kind of model is being implemented.
Nailing each step requires various manipulations (or a connectome) beyond passive neural recordings. Agree?

10.07.2025 17:38 — 👍 2    🔁 0    💬 2    📌 0
Reply to Kording Bluesky Post - interactive HTML page created with Claude.

Thanks a lot for the post! Some thoughts:
claude.ai/public/artif...

08.07.2025 16:49 — 👍 9    🔁 1    💬 1    📌 0

Exploring neural manifolds across a wide range of intrinsic dimensions https://www.biorxiv.org/content/10.1101/2025.07.01.662533v1

04.07.2025 19:15 — 👍 6    🔁 3    💬 0    📌 0
From memories to maps: Mechanisms of in-context reinforcement learning in transformers
Humans and animals show remarkable learning efficiency, adapting to new environments with minimal experience. This capability is not well captured by standard reinforcement learning algorithms that re...

Humans and animals can rapidly learn in new environments. What computations support this? We study the mechanisms of in-context reinforcement learning in transformers, and propose how episodic memory can support rapid learning. Work w/ @kanakarajanphd.bsky.social: arxiv.org/abs/2506.19686

26.06.2025 19:01 — 👍 73    🔁 24    💬 3    📌 1
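A minimal sketch of the general idea that episodic memory can support fast learning (generic episodic control with hypothetical names and parameters; the paper's transformer mechanism differs): store individual experiences, then value new situations by similarity-weighted retrieval, so a single rewarding episode is immediately reusable.

```python
import numpy as np

class EpisodicMemory:
    """Toy episodic control: store (state, action, return) tuples and
    estimate values by similarity-weighted retrieval."""
    def __init__(self):
        self.keys, self.actions, self.returns = [], [], []

    def write(self, state, action, ret):
        self.keys.append(state); self.actions.append(action); self.returns.append(ret)

    def value(self, state, action, beta=5.0):
        if not self.keys:
            return 0.0
        K = np.array(self.keys)
        w = np.exp(-beta * np.linalg.norm(K - state, axis=1))  # similarity kernel
        mask = np.array(self.actions) == action
        if not mask.any():
            return 0.0
        return float(w[mask] @ np.array(self.returns)[mask] / (w[mask].sum() + 1e-8))

mem = EpisodicMemory()
mem.write(np.array([0.0, 1.0]), action=1, ret=1.0)   # one rewarding episode
q = [mem.value(np.array([0.1, 0.9]), a) for a in (0, 1)]
print(q, "-> pick action", int(np.argmax(q)))        # reuses a single experience
```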

Spatially and non-spatially tuned hippocampal neurons are linear perceptual and nonlinear memory encoders https://www.biorxiv.org/content/10.1101/2025.06.23.661173v1

25.06.2025 06:15 — 👍 2    🔁 1    💬 0    📌 0
Learning dynamics in linear recurrent neural networks
Recurrent neural networks (RNNs) are powerful models used widely in both machine learning and neuroscience to learn tasks with temporal dependencies and to model neural dynamics. However, despite...

How do task dynamics impact learning in networks with internal dynamics?

Excited to share our ICML Oral paper on learning dynamics in linear RNNs!
with @clementinedomine.bsky.social @mpshanahan.bsky.social and Pedro Mediano

openreview.net/forum?id=KGO...

20.06.2025 17:28 — 👍 33    🔁 12    💬 1    📌 0
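For concreteness, here is a minimal linear RNN trained by backpropagation through time on a task with a temporal dependency (a generic toy, not the paper's setup; task, sizes, and learning rate are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, lr = 20, 30, 0.02
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # recurrent weights
u = rng.standard_normal(n) / np.sqrt(n)              # input weights
c = np.zeros(n)                                      # linear readout

for epoch in range(501):
    x = rng.standard_normal(T)
    y_star = np.concatenate([[0.0] * 3, x[:-3]])     # task: recall input from 3 steps ago
    # forward pass: h_{t+1} = W h_t + u x_t,  y_t = c . h_{t+1}
    h = np.zeros((T + 1, n))
    for t in range(T):
        h[t + 1] = W @ h[t] + u * x[t]
    e = h[1:] @ c - y_star
    # backward pass (BPTT; delta carries dL/dh backwards in time)
    dW, du, dc, delta = np.zeros_like(W), np.zeros_like(u), np.zeros_like(c), np.zeros(n)
    for t in reversed(range(T)):
        dc += e[t] * h[t + 1]
        delta = e[t] * c + W.T @ delta
        dW += np.outer(delta, h[t])
        du += delta * x[t]
    for p, g in ((W, dW), (u, du), (c, dc)):
        p -= lr * g / T
    if epoch % 100 == 0:
        print(f"epoch {epoch:3d}  loss {np.mean(e**2):.3f}")   # track training loss
```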
High-resolution laminar recordings reveal structure-function relationships in monkey V1
The relationship between the structural properties of diverse neuronal populations in the monkey primary visual cortex (V1) and their functional visual processing in vivo remains a critical knowledge ...

On behalf of Nicole Carr: new preprint from the Chand/Moore labs! High-resolution laminar recordings reveal structure-function relationships in monkey V1.

1/4

www.biorxiv.org/content/10.1...

19.05.2025 21:08 — 👍 58    🔁 11    💬 2    📌 1

Many recent posts on free energy. Here is a summary from my class “Statistical mechanics of learning and computation” on the many relations between free energy, KL divergence, large deviation theory, entropy, Boltzmann distribution, cumulants, Legendre duality, saddle points, fluctuation-response…

02.05.2025 19:22 — 👍 63    🔁 9    💬 1    📌 0
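A few of the standard identities in that web of relations (textbook statements, not excerpts from the class notes):

```latex
\begin{align*}
  F &= -\tfrac{1}{\beta}\ln Z, \qquad Z = \textstyle\sum_x e^{-\beta E(x)}, \qquad
      p(x) = \tfrac{1}{Z}\, e^{-\beta E(x)} \\
  F &= \langle E \rangle_p - \tfrac{1}{\beta}\, S[p]
      && \text{(energy--entropy decomposition)} \\
  \beta \mathcal{F}[q] &\equiv \beta \langle E \rangle_q - S[q]
      \;=\; \beta F + \mathrm{KL}(q \,\|\, p)
      && \text{(variational free energy, minimized at } q = p\text{)} \\
  \ln \big\langle e^{-sE} \big\rangle_p
    &= \textstyle\sum_{n \ge 1} \frac{(-s)^n}{n!}\, \kappa_n(E)
      && \text{(cumulant generating function)}
\end{align*}
```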
Statistical Mechanics of Transfer Learning in Fully Connected Networks in the Proportional Limit
Tools from spin glass theory such as the replica method help explain the efficacy of transfer learning.

Our paper on the statistical mechanics of transfer learning is now published in PRL. Franz-Parisi meets Kernel Renormalization in this nice collaboration with friends in Bologna (F. Gerace) and Parma (P. Rotondo, R. Pacelli).
journals.aps.org/prl/abstract...

01.05.2025 16:13 — 👍 7    🔁 2    💬 0    📌 0

Neat! Thanks a ton for the responses -- very helpful. 🙂
This is a cool example.

30.04.2025 14:14 — 👍 1    🔁 0    💬 1    📌 0
Neuronal correlations shape the scaling behavior of memory capacity and nonlinear computational capability of recurrent neural networks
Reservoir computing is a powerful framework for real-time information processing, characterized by its high computational ability and quick learning, with applications ranging from machine learning to biological systems. In this paper, we demonstrate that the memory capacity of a reservoir recurrent neural network scales sublinearly with the number of readout neurons. To elucidate this phenomenon, we develop a theoretical framework for analytically deriving memory capacity, attributing the decaying growth of memory capacity to neuronal correlations. In addition, numerical simulations reveal that once memory capacity becomes sublinear, increasing the number of readout neurons successively enables nonlinear processing at progressively higher polynomial orders. Furthermore, our theoretical framework suggests that neuronal correlations govern not only memory capacity but also the sequential growth of nonlinear computational capabilities. Our findings establish a foundation for designing scalable and cost-effective reservoir computing, providing novel insights into the interplay among neuronal correlations, linear memory, and nonlinear processing.

Cool!
arxiv.org/abs/2504.19657

30.04.2025 14:04 — 👍 6    🔁 1    💬 0    📌 1
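Memory capacity here has a standard operational definition (Jaeger's MC: the summed squared correlation between delayed inputs and trained linear readouts). A generic echo-state sketch of measuring it as a function of how many neurons feed the readout (my toy with in-sample R², not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 200, 5000, 200
W = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)   # random reservoir, spectral radius ~0.9
w_in = rng.standard_normal(N)
u = rng.uniform(-1, 1, T)                            # iid input stream

X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

def memory_capacity(M, max_delay=60):
    """MC = sum over delays k of R^2 between a trained linear readout
    (using the first M neurons) and the input from k steps ago."""
    mc = 0.0
    for k in range(1, max_delay):
        Xk, yk = X[washout:, :M], np.roll(u, k)[washout:]   # target: u[t-k]
        w = np.linalg.lstsq(Xk, yk, rcond=None)[0]
        mc += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
    return mc

for M in (10, 25, 50, 100, 200):
    print(f"M = {M:3d} readout neurons  ->  MC ≈ {memory_capacity(M):.1f}")
```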

So, is the key message that eigendecomposition provides more insight than SVD (or similarly that we must consider the overlaps between singular vectors, which would be negative in the negative-eigenvalue case)?

24.04.2025 13:25 — 👍 3    🔁 0    💬 1    📌 0

Looks cool!
This assumes LR structure is linked to a neg eigval. But alternatives exist: stable linear dynamics w/ LR structure could have tiny bulk & nearly-unstable pos eigval (1-ε). Or nonlinear networks could have large bulk & large pos eigval. Neither shows "low-rank suppression," correct?

24.04.2025 13:23 — 👍 5    🔁 0    💬 1    📌 0
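To make the eigendecomposition-vs-SVD point in this thread concrete (my toy, not from the discussion itself): a low-rank term with a negative eigenvalue still produces a large positive singular value, and the sign shows up only in the overlap between the top left and right singular vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.standard_normal(n); u /= np.linalg.norm(u)
J = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # weak random bulk
J += -3.0 * np.outer(u, u)                           # rank-1 part with eigenvalue ~ -3

eigs = np.linalg.eigvals(J)
print("outlier eigenvalue:  ", eigs[np.argmax(np.abs(eigs))].real)  # ~ -3.0

U, s, Vt = np.linalg.svd(J)
print("top singular value:  ", s[0])                 # ~ +3.0: the SVD is sign-blind
print("left/right overlap:  ", U[:, 0] @ Vt[0])      # ~ -1.0: the sign lives here
```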
Stereo olfaction underlies stable coding of head direction in blind mice - Nature Communications
Stereo olfaction involves comparing odor differences between the two nostrils. Here, using neuronal recordings and a behavioral test, the authors demonstrate that blind mice use stereo olfaction to fo...

🚨 new paper alert
Blind mice use stereo olfaction, comparing smells between nostrils, to maintain a stable sense of direction. Blocking this ability disrupts their internal compass.

Kudos to @kasumbisa.bsky.social! Another cool chapter of the Trenholm-Peyrache collab 😉

www.nature.com/articles/s41...

14.04.2025 18:37 — 👍 114    🔁 34    💬 3    📌 5

Woah

28.03.2025 11:46 — 👍 1    🔁 0    💬 0    📌 0

Also! Check out "A theory of multi-task computation and task selection" (poster 2-98) by Owen Marschall, with me and Ashok Litwin-Kumar. Owen analyzes RNNs that embed lots of tasks in different subspaces and transition between a "spontaneous" state and task-specific dynamics via phase transitions.

26.03.2025 21:38 — 👍 2    🔁 0    💬 0    📌 0
Barcode activity in a recurrent network model of the hippocampus enables efficient memory binding

How does barcode activity in the hippocampus enable precise and flexible memory? How does this relate to key-value memory systems? Our work (w/ Jack Lindsey, Larry Abbott, Dmitriy Aronov, @selmaan.bsky.social) is now in eLife as a reviewed preprint: elifesciences.org/reviewed-pre...

24.03.2025 19:46 — 👍 21    🔁 9    💬 1    📌 2

I'll be presenting this at #cosyne2025 (poster 3-50)!

I'll also be giving a talk at the "Collectively Emerged Timescales" workshop on this work, plus other projects on emergent dynamics in neural circuits.

Looking forward to seeing everyone in 🇨🇦!

26.03.2025 18:54 — 👍 12    🔁 2    💬 1    📌 0
Key-value memory in the brain
Classical models of memory in psychology and neuroscience rely on similarity-based retrieval of stored patterns, where similarity is a function of ret…

Our paper on key-value memory in the brain (updated from the preprint version) is now out in Neuron @cp-neuron.bsky.social
authors.elsevier.com/a/1kqIJ3BtfH...

26.03.2025 15:09 — 👍 50    🔁 13    💬 0    📌 2
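The computational core of key-value memory, stripped of the neuroscience (a generic softmax-retrieval sketch, as in attention; see the paper for the proposed brain mapping): keys are matched against a query, and the matching weights blend the stored values, so content can be arbitrary and unrelated to the addressing code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P = 64, 20
K = rng.standard_normal((P, d)) / np.sqrt(d)   # keys (addresses)
V = rng.standard_normal((P, d))                # values (content), independent of keys

def read(query, beta=20.0):
    # softmax attention over keys returns a similarity-weighted blend of values
    w = np.exp(beta * K @ query)
    return (w / w.sum()) @ V

q = K[7] + 0.1 * rng.standard_normal(d) / np.sqrt(d)   # noisy cue for memory 7
v_hat = read(q)
print("correlation with stored value:", np.corrcoef(v_hat, V[7])[0, 1])  # ~1
```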

We know dopamine guides reinforcement learning in externally rewarded behaviors—think a mouse learning to press a lever for food or juice. But what about skills like speech or athletics, where there's no explicit external reward, just an internal goal to match? 🧵 (1/7)

17.03.2025 17:48 — 👍 22    🔁 9    💬 2    📌 0

Transformers employ different strategies through training to minimize loss, but how do these trade off, and why?

Excited to share our newest work, where we show remarkably rich competitive and cooperative interactions (termed "coopetition") as a transformer learns.

Read on 🔎⬇️

11.03.2025 07:13 — 👍 7    🔁 4    💬 1    📌 0
Natural behaviour is learned through dopamine-mediated reinforcement - Nature
Studies in zebra finches show that dopamine has a key role as a reinforcement signal in the trial-and-error process of learning that underlies complex natural behaviours.

New paper from Kasdin and Duffy (feat. @pantamallion.bsky.social, @neurokim.bsky.social, and others) on how dopamine shapes song-learning trajectories in juvenile birds.

www.nature.com/articles/s41...

12.03.2025 17:17 — 👍 15    🔁 2    💬 1    📌 0
