
Ching Fang

@chingfang.bsky.social

Postdoc @Harvard interested in neuro-AI and neurotheory. Previously @columbia, @ucberkeley, and @apple. 🧠🧪🤖

336 Followers  |  257 Following  |  21 Posts  |  Joined: 13.11.2024

Latest posts by chingfang.bsky.social on Bluesky

Feature-specific threat coding in lateral septum guides defensive action The ability to rapidly detect and evaluate potential threats is essential for survival and requires the integration of sensory information with internal state and prior experience. The lateral septum...

Our new preprint: Feature-specific threat coding in lateral septum guides defensive action.

We describe how the LS guides defensive responses through critical computations built from functionally and molecularly distinct cells and their afferent inputs.

www.researchsquare.com/article/rs-6...

16.06.2025 12:38 · 👍 23  🔁 10  💬 4  📌 2

Oh cool, thanks for sharing! It does seem like we see very similar things. We should definitely chat 😀

27.06.2025 14:32 · 👍 1  🔁 0  💬 1  📌 0

In conclusion: studying the cognitive computations behind rapid learning requires a broader hypothesis space of planning than standard RL. In both tasks, strategies use intermediate computations cached in memory tokens: episodic memory itself can be a computational workspace!

26.06.2025 19:01 · 👍 2  🔁 0  💬 0  📌 0

In tree mazes, we find a strategy where in-context experience is stitched together to label a critical path from root to goal. If a query state is on this path, an action is chosen to traverse deeper into the tree. If not, the action to go to the parent node is optimal. (8/9)

26.06.2025 19:01 · 👍 2  🔁 0  💬 1  📌 0
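The tree-maze strategy above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the parent/children maps, node labels, and action names are all mine.

```python
# Hypothetical sketch of the "critical path" strategy: in-context experience
# labels the root-to-goal path; at a query state, move deeper into the tree
# if on the path, otherwise move to the parent node.

def critical_path(parent, goal):
    """Walk parent pointers from the goal back to the root, collecting the path."""
    path = [goal]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return set(path)

def choose_action(parent, children, path, query):
    if query in path:
        # descend to the child that stays on the critical path
        for c in children[query]:
            if c in path:
                return ("down", c)
        return ("stay", query)  # the query is the goal itself
    return ("up", parent[query])

# Tiny binary tree: node 0 is the root; the goal is node 5.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
path = critical_path(parent, goal=5)  # {0, 2, 5}

print(choose_action(parent, children, path, query=0))  # on path: descend to 2
print(choose_action(parent, children, path, query=3))  # off path: up to parent 1
```

Note the appeal of this strategy: the agent never needs a full world model at decision time, only a cached membership test ("am I on the path?") plus local tree structure.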

Instead, our analysis of the model in gridworld suggests the following strategy: (1) Use in-context experience to align representations to Euclidean space, (2) Given a query state, calculate the angle in Euclidean space to goal, (3) Use the angle to select an action. (7/9)

26.06.2025 19:01 · 👍 3  🔁 0  💬 1  📌 0
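The three-step gridworld strategy above has a simple skeleton. A minimal sketch, assuming step (1) has already mapped states to 2D Euclidean coordinates; the function and action names are illustrative, not from the paper:

```python
# Hypothetical sketch: (2) compute the angle from the query state to the goal
# in the aligned coordinate space, then (3) pick the discrete action whose
# direction best matches that angle.
import math

ACTIONS = {"right": (1, 0), "up": (0, 1), "left": (-1, 0), "down": (0, -1)}

def select_action(query_xy, goal_xy):
    dx = goal_xy[0] - query_xy[0]
    dy = goal_xy[1] - query_xy[1]
    theta = math.atan2(dy, dx)  # angle to goal in Euclidean space
    # choose the action whose direction has the largest cosine similarity to theta
    return max(
        ACTIONS,
        key=lambda a: math.cos(theta - math.atan2(ACTIONS[a][1], ACTIONS[a][0])),
    )

print(select_action((2, 2), (5, 2)))  # goal to the east -> "right"
print(select_action((2, 2), (2, 0)))  # goal to the south -> "down"
```

This is what makes shortcut paths possible: the angle computation generalizes to state pairs the agent never traversed together in-context.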

Interestingly, when we examine the mechanisms the model uses for decision making, we do not see signatures expected from standard model-free or model-based learning: the model doesn't use value learning or path planning/state tracking at decision time. (6/9)

26.06.2025 19:01 · 👍 3  🔁 0  💬 1  📌 0

We find a few representation learning strategies: (1) in-context structure learning to form a map of the environment, and (2) alignment of representations across contexts with the same structure. These connect to computations suggested for the hippocampal-entorhinal system. (5/9)

26.06.2025 19:01 · 👍 2  🔁 0  💬 1  📌 0

As expected, these meta-learned models learn more efficiently in new environments than standard RL since they have useful priors over the task distribution. For instance, models can take shortcut paths in gridworld. So what RL strategies emerged to support this? (4/9)

26.06.2025 19:01 · 👍 2  🔁 0  💬 1  📌 0

We train transformers to perform in-context RL (via decision-pretraining; Lee et al., 2023) in planning tasks: gridworld and tree mazes (inspired by labyrinth mazes: elifesciences.org/articles/66175). Importantly, each new task has novel sensory observations. (3/9)

26.06.2025 19:01 · 👍 2  🔁 0  💬 1  📌 0
Barcode activity in a recurrent network model of the hippocampus enables efficient memory binding

Transformers are a useful setting for studying these questions because they can learn rapidly in-context. Key-value architectures have also been connected to episodic memory systems in the brain! E.g., see our previous work (among many others) (2/9): elifesciences.org/reviewed-pre...

26.06.2025 19:01 · 👍 1  🔁 0  💬 1  📌 0
From memories to maps: Mechanisms of in-context reinforcement learning in transformers Humans and animals show remarkable learning efficiency, adapting to new environments with minimal experience. This capability is not well captured by standard reinforcement learning algorithms that re...

Humans and animals can rapidly learn in new environments. What computations support this? We study the mechanisms of in-context reinforcement learning in transformers, and propose how episodic memory can support rapid learning. Work w/ @kanakarajanphd.bsky.social : arxiv.org/abs/2506.19686

26.06.2025 19:01 · 👍 80  🔁 25  💬 4  📌 3

🚀 Exciting news! Our paper "From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks" has been accepted at ICLR 2025!

arxiv.org/abs/2409.14623

A thread on how relative weight initialization shapes learning dynamics in deep networks. 🧵 (1/9)

04.04.2025 14:45 · 👍 29  🔁 9  💬 1  📌 0

Already feeling #cosyne2025 withdrawal? Apply to the Flatiron Institute Junior Theoretical Neuroscience Workshop! Applications due April 14th

jtnworkshop2025.flatironinstitute.org

02.04.2025 16:49 · 👍 9  🔁 8  💬 0  📌 0
CDS building which looks like a jenga tower

Life update: I'm starting as faculty at Boston University
@bucds.bsky.social in 2026! BU has SCHEMES for LM interpretability & analysis, I couldn't be more pumped to join a burgeoning supergroup w/ @najoung.bsky.social @amuuueller.bsky.social. Looking for my first students, so apply and reach out!

27.03.2025 02:24 · 👍 244  🔁 13  💬 35  📌 7
About โ€” Hands Off!

What are your plans for April 5th? Decide now which event you'll attend and who you'll bring along. See you in the streets!
handsoff2025.com/about

26.03.2025 18:13 · 👍 14  🔁 6  💬 1  📌 1

I'll be presenting this at #cosyne2025 (poster 3-50)!

I'll also be giving a talk at the "Collectively Emerged Timescales" workshop on this work, plus other projects on emergent dynamics in neural circuits.

Looking forward to seeing everyone in 🇨🇦!

26.03.2025 18:54 · 👍 12  🔁 2  💬 1  📌 0

Our paper, โ€œA Theory of Initializationโ€™s Impact on Specialization,โ€ has been accepted to ICLR 2025!
openreview.net/forum?id=RQz...
We show how neural networks can build specialized or shared representations depending on initialization, with consequences for continual learning.
(1/8)

26.03.2025 17:38 · 👍 7  🔁 3  💬 1  📌 1

We'll have a poster on this at #Cosyne2025 in the third poster session (3-055). Come say hi if you're curious!

24.03.2025 19:48 · 👍 2  🔁 0  💬 0  📌 0
Key-value memory in the brain Classical models of memory in psychology and neuroscience rely on similarity-based retrieval of stored patterns, where similarity is a function of retrieval cues and the stored patterns. While parsimo...

In particular, barcodes are a plausible neural correlate for the precise slot retrieval mechanism in key-value memory systems (see arxiv.org/abs/2501.02950)! Barcodes provide a content-independent scaffold that binds to memory content and prevents memories with overlapping content from blurring.

24.03.2025 19:46 · 👍 2  🔁 0  💬 0  📌 0

Why is this useful? We show that place fields and barcodes are complementary. Barcodes enable precise recall of cache locations, while place fields enable flexible search for nearby caches. Both are necessary. We also show how barcode memory combines with predictive maps; check out the paper for more!

24.03.2025 19:46 · 👍 1  🔁 0  💬 1  📌 0

A memory of a cache is formed by binding place + seed content to the resulting RNN barcode via Hebbian learning. An animal can recall this memory from place inputs (and high recurrent strength in the RNN). These barcodes capture the spatial correlation profile seen in data.

24.03.2025 19:46 · 👍 0  🔁 0  💬 1  📌 0
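The Hebbian bind-and-recall loop described above can be illustrated with outer-product associative weights. This is my own minimal construction under stated assumptions (random sparse vectors standing in for RNN barcodes; two hypothetical cache sites, "tree" and "rock"), not the paper's exact model:

```python
# Sketch: bind place + seed content to a barcode via Hebbian (outer-product)
# weights, then recall: place cue -> barcode -> bound seed content.
import numpy as np

rng = np.random.default_rng(0)
d = 200

def barcode():
    # sparse, roughly uncorrelated pattern standing in for chaotic RNN activity
    b = (rng.random(d) < 0.05).astype(float)
    return b / (np.linalg.norm(b) + 1e-12)

places = {name: rng.standard_normal(d) / np.sqrt(d) for name in ["tree", "rock"]}
seeds  = {name: rng.standard_normal(d) / np.sqrt(d) for name in ["tree", "rock"]}

W_pb = np.zeros((d, d))  # Hebbian place -> barcode association
W_bc = np.zeros((d, d))  # Hebbian barcode -> content association
for name in places:
    b = barcode()
    W_pb += np.outer(b, places[name])
    W_bc += np.outer(seeds[name], b)

def recall(place_vec):
    b = W_pb @ place_vec             # cue the barcode from place input
    b = (b > 0.5 * b.max()) * 1.0    # crude cleanup toward a sparse pattern
    b /= np.linalg.norm(b) + 1e-12
    return W_bc @ b                  # read out the bound seed content

def sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(sim(recall(places["tree"]), seeds["tree"]))  # high: correct cache recalled
print(sim(recall(places["tree"]), seeds["rock"]))  # low: other cache untouched
```

Because the barcodes are nearly orthogonal, the two memories barely interfere even though their place and content vectors could overlap; that separation is the point of the content-independent scaffold.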

We propose an RNN model of barcode memory. The RNN is initialized with random weights and receives place inputs. When recurrent gain is low, RNN units encode place. When recurrent gain is high, the random weights produce sparse, uncorrelated barcodes via chaotic dynamics.

24.03.2025 19:46 · 👍 0  🔁 0  💬 1  📌 0
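The two gain regimes above can be seen in a toy rate RNN. This is my own illustration, not the paper's model: low gain leaves activity input-dominated (nearby places give correlated states), while high gain lets the random recurrence decorrelate them. (Getting genuinely sparse barcodes would additionally need something like nonnegative activations.)

```python
# Toy rate RNN with fixed random weights: the recurrent gain switches the
# network between an input-dominated "place" regime and a chaotic regime
# where states for nearby places decorrelate.
import numpy as np

rng = np.random.default_rng(1)
n = 300
J = rng.standard_normal((n, n)) / np.sqrt(n)  # fixed random recurrent weights

def run(place_input, gain, steps=200, dt=0.1):
    h = np.zeros(n)
    for _ in range(steps):
        h += dt * (-h + np.tanh(gain * J @ h + place_input))
    return h

# two strongly overlapping place inputs, standing in for nearby locations
base = rng.standard_normal(n)
p1 = base + 0.1 * rng.standard_normal(n)
p2 = base + 0.1 * rng.standard_normal(n)

def corr(u, v):
    return float(np.corrcoef(u, v)[0, 1])

low  = corr(run(p1, gain=0.3), run(p2, gain=0.3))  # place regime: states track input
high = corr(run(p1, gain=8.0), run(p2, gain=8.0))  # chaotic regime: states decorrelate
print(low, high)
```

The key design point is that nothing is learned here: one fixed random weight matrix supports both codes, with gain acting as the switch.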

We were inspired by @selmaan.bsky.social and Emily Mackevicius' data on neural activity in the hippocampus of food-caching birds during a memory task. Cache events are encoded by barcode activity: sparse, uncorrelated patterns. Barcode and place activity coexist in the same population!

24.03.2025 19:46 · 👍 1  🔁 0  💬 1  📌 0
Barcode activity in a recurrent network model of the hippocampus enables efficient memory binding

How does barcode activity in the hippocampus enable precise and flexible memory? How does this relate to key-value memory systems? Our work (w/ Jack Lindsey, Larry Abbott, Dmitriy Aronov, @selmaan.bsky.social ) is now in eLife as a reviewed preprint: elifesciences.org/reviewed-pre...

24.03.2025 19:46 · 👍 21  🔁 9  💬 1  📌 2

We're organizing a #CoSyNe2025 workshop on what agent models can teach us about neuroscience! See Sat's thread for more info 😊

18.02.2025 19:08 · 👍 15  🔁 3  💬 0  📌 0
Promoting cross-modal representations to improve multimodal foundation models for physiological signals Many healthcare applications are inherently multimodal, involving several physiological signals. As sensors for these signals become more common, improving machine learning methods for multimodal heal...

If you're interested in foundation model approaches for combining EEG/EMG/ECG/EOG data, check out our work at the NeurIPS AIM-FM workshop tomorrow! This was work done with Apple's Body-Sensing Intelligence Group 🧠🤖 arxiv.org/abs/2410.16424

13.12.2024 19:48 · 👍 7  🔁 0  💬 0  📌 0

Thanks for putting this together! Would also love to be added :)

26.11.2024 23:16 · 👍 1  🔁 0  💬 0  📌 0

Thanks for making this Amy! Would also like to be added if possible :)

26.11.2024 20:44 · 👍 0  🔁 0  💬 0  📌 0
