
Qihong (Q) Lu

@qlu.bsky.social

Computational models of episodic memory Postdoc with Daphna Shohamy & Stefano Fusi @ Columbia PhD with Ken Norman & Uri Hasson @ Princeton https://qihongl.github.io/

1,358 Followers  |  649 Following  |  62 Posts  |  Joined: 14.09.2023

Latest posts by qlu.bsky.social on Bluesky


I'm super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...

24.09.2025 09:52 · 👍 204  🔁 85  💬 9  📌 8
New landscape of the diagnosis of Alzheimer's disease Alzheimer's disease involves a drastic departure from the cognitive, functional, and behavioural trajectory of normal ageing, and is both a dreaded and highly prevalent cause of disability to individu...

www.thelancet.com/journals/lan...

23.09.2025 20:09 · 👍 13  🔁 2  💬 0  📌 0
Latent learning: episodic memory complements parametric learning by enabling flexible reuse of experiences When do machine learning systems fail to generalize, and what mechanisms could improve their generalization? Here, we draw inspiration from cognitive science to argue that one weakness of machine lear...

Why does AI sometimes fail to generalize, and what might help? In a new paper (arxiv.org/abs/2509.16189), we highlight the latent learning gap, which unifies findings from language modeling to agent navigation, and suggest that episodic memory complements parametric learning to bridge it. Thread:

22.09.2025 04:21 · 👍 44  🔁 10  💬 1  📌 1
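The idea in the post above, episodic memory complementing a slow parametric learner, can be sketched as a toy in a few lines. This is my own illustrative sketch, not the paper's architecture: all class and function names, the k-NN read-out, and the fixed gating weight are assumptions for demonstration.

```python
# Toy: a slow parametric path plus an episodic store that reuses raw experiences.
# Illustrative only; not the architecture from the paper.

class EpisodicMemory:
    """Stores raw (x, y) experiences; reads by k-nearest-neighbour average."""
    def __init__(self, k=3):
        self.store, self.k = [], k

    def write(self, x, y):
        self.store.append((x, y))

    def read(self, x):
        nearest = sorted(self.store, key=lambda item: abs(item[0] - x))[:self.k]
        return sum(y for _, y in nearest) / len(nearest)

def parametric_predict(w, x):
    """Slow-learning parametric path (here, just a linear model)."""
    return w * x

def combined_predict(w, memory, x, gate=0.5):
    """Blend the parametric estimate with episodic retrieval."""
    return gate * parametric_predict(w, x) + (1 - gate) * memory.read(x)

memory = EpisodicMemory(k=1)
for x, y in [(0.0, 0.0), (1.0, 3.0), (2.0, 6.0)]:   # experiences where y = 3x
    memory.write(x, y)

# A mis-trained parametric model (w = 1) underestimates on its own;
# episodic retrieval pulls the prediction toward the stored experience.
p_only = parametric_predict(1.0, 2.0)          # -> 2.0
blended = combined_predict(1.0, memory, 2.0)   # -> 0.5*2.0 + 0.5*6.0 = 4.0
```

The point of the sketch is flexible reuse: the episodic store can exploit a single stored experience immediately, without any weight updates to the parametric path.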
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking: incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31–50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present nearly any claim as statistically significant.


🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825

12.09.2025 10:33 · 👍 259  🔁 94  💬 5  📌 19
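The mechanism behind LLM hacking can be shown in a toy simulation (my own sketch, not the authors' code): when an annotator's errors correlate with a covariate, a regression on its labels can look significant even though the ground-truth labels carry no effect. The error model and all names here are hypothetical.

```python
# Toy "LLM hacking" simulation: a biased annotator manufactures significance.
import math
import random

random.seed(0)
n = 500
x = [random.gauss(0, 1) for _ in range(n)]           # a text covariate
y_true = [random.randint(0, 1) for _ in range(n)]    # ground truth: independent of x

def slope_pvalue(xs, ys):
    """Two-sided p-value for the x-y association (Pearson r, normal approximation)."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((m - 2) / (1 - r * r))
    return math.erfc(abs(t) / math.sqrt(2))

def llm_annotate(ys, xs, err=0.4):
    """Biased annotator: with probability err, the label is driven by x, not truth."""
    return [(1 if a > 0 else 0) if random.random() < err else b
            for a, b in zip(xs, ys)]

p_true = slope_pvalue(x, y_true)   # no real effect, so typically p > 0.05
y_llm = llm_annotate(y_true, x)
p_llm = slope_pvalue(x, y_llm)     # annotation bias creates a "significant" slope
hacked = (p_true > 0.05) and (p_llm < 0.05)
```

Because the annotator's errors depend on x, the annotated labels acquire a spurious correlation with x, flipping the statistical conclusion: exactly the failure mode the preprint quantifies at scale.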

Our new lab for Human & Machine Intelligence is officially open at Princeton University!

Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)

08.09.2025 13:59 · 👍 50  🔁 15  💬 2  📌 0
Cognitive scientists and AI researchers make a forceful call to reject "uncritical adoption" of AI in academia A new paper calls on academia to repel rampant AI in university departments and classrooms.

Cognitive scientists and AI researchers make a forceful call to reject "uncritical adoption" of AI in academia
www.bloodinthemachine.com/p/cognitive-...

07.09.2025 23:16 · 👍 103  🔁 44  💬 1  📌 4
Sensory Compression as a Unifying Principle for Action Chunking and Time Coding in the Brain The brain seamlessly transforms sensory information into precisely-timed movements, enabling us to type familiar words, play musical instruments, or perform complex motor routines with millisecond pre...

I'm excited to share that my new postdoctoral position is going so well that I submitted a new paper at the end of my first week! www.biorxiv.org/content/10.1... A thread below

06.09.2025 14:35 · 👍 55  🔁 11  💬 2  📌 2

A key-value memory network can learn to represent event memories by their causal relations to support event cognition!
Congrats to @hayoungsong.bsky.social on this exciting paper! So fun to be involved!

05.09.2025 13:07 · 👍 13  🔁 2  💬 0  📌 0
Representations of stimulus features in the ventral hippocampus The ventral hippocampus (vHPC) controls emotional response to environmental cues, yet the mechanisms are unclear. Biane et al. examine how positive and negative experiences are encoded by vHPC ensembl...

Representations of stimulus features in the ventral hippocampus

🧠🟦

www.cell.com/neuron/fullt...

30.08.2025 11:44 · 👍 36  🔁 9  💬 0  📌 0

Our new study (titled "Memory Loves Company") asks whether working memory holds more when objects belong together.

And yes: when everyday objects are paired meaningfully (Bow-Arrow), people remember them better than when they're unrelated (Glass-Arrow). (mini thread)

28.08.2025 12:07 · 👍 69  🔁 15  💬 5  📌 0

Now out in print at @jephpp.bsky.social ! doi.org/10.1037/xhp0...

Yu, X., Thakurdesai, S. P., & Xie, W. (2025). Associating everything with everything else, all at once: Semantic associations facilitate visual working memory formation for real-world objects. JEP:HPP.

27.06.2025 01:24 · 👍 13  🔁 2  💬 0  📌 1
What Emotions Really Are In this provocative contribution to the philosophy of science and mind, Paul E. Griffiths criticizes contemporary philosophy and psychology of emotion for failing to take in an evolutionary perspectiv...

Who else argues (in print) that we should eliminate the category of "emotion" as an explanatory target?

I know of Griffiths:
press.uchicago.edu/ucp/books/bo...

And Moors:
www.cambridge.org/core/books/d...

26.08.2025 20:38 · 👍 29  🔁 10  💬 9  📌 0
Longer fMRI brain scans boost reliability, but only to a point Around 30 minutes of imaging per person seems to be the "sweet spot" for linking functional connectivity differences to traits in an accurate and cost-effective way.

When powering fMRI studies, sample size is king, but scan duration can also be a powerful tool, improving phenotypic prediction and cost-efficiency, a new analysis shows

By @claudia-lopez.bsky.social

#neuroskyence

www.thetransmitter.org/fmri/longer-...

20.08.2025 14:17 · 👍 31  🔁 10  💬 2  📌 2

Cortico-hippocampal interactions underlie schema-supported memory encoding in older adults

New paper led by @shenyanghuang.bsky.social!
academic.oup.com/cercor/artic...

Older adults' memory benefits from richer semantic contexts. We found connectivity patterns supporting this semantic scaffolding.

19.08.2025 18:26 · 👍 16  🔁 6  💬 0  📌 0

Successful prediction of the future enhances encoding of the present.

I am so delighted that this work found a wonderful home at Open Mind. The peer review journey was a rollercoaster but it *greatly* improved the paper.

direct.mit.edu/opmi/article...

09.08.2025 16:27 · 👍 77  🔁 22  💬 2  📌 2
Episodic details are better remembered in plausible relative to implausible counterfactual simulations - Psychonomic Bulletin & Review People often engage in episodic counterfactual thinking, or mentally simulating how the experienced past might have been different from how it was. A commonly held view is that mentally simulating alt...

This was a fun paper to write, and one that fits nicely with some recent work I've been doing on the role of counterfactual simulation in memory encoding. link.springer.com/article/10.3...

06.08.2025 02:27 · 👍 17  🔁 9  💬 0  📌 0
Attractor dynamics of working memory explain a concurrent evolution of stimulus-specific and decision-consistent biases in visual estimation People exhibit biases when perceiving features of the world, shaped by both external stimuli and prior decisions. By tracking behavioral, neural, and mechanistic markers of stimulus- and decision-rela...

Excited to share that our paper is now out in Neuron @cp-neuron.bsky.social (dlvr.it/TM9zJ8).

Our perception isn't a perfect mirror of the world. It's often biased by our expectations and beliefs. How do these biases unfold over time, and what shapes their trajectory? A summary thread. (1/13)

29.07.2025 16:02 · 👍 40  🔁 12  💬 1  📌 1

Congrats again, Cody!!

28.07.2025 12:29 · 👍 1  🔁 0  💬 1  📌 0

Take a look if you are interested in the differences between LLM memory augmentation and human episodic memory!
And let us know if you have any feedback!

28.07.2025 12:27 · 👍 6  🔁 0  💬 0  📌 0

We put out this preprint a couple months ago, but I really wanted to replicate our findings before we went to publication.

At first, what we found was very confusing!

But when we dug in, it revealed a fascinating neural strategy for how we switch between tasks

doi.org/10.1101/2024.09.29.615736

🧵

27.07.2025 21:31 · 👍 84  🔁 24  💬 2  📌 1

@nichols.bsky.social collaborated with researchers at the National University of Singapore on a recent study published in @nature.com on how longer duration fMRI brain scans reduce costs and improve prediction accuracy for AI models. Read more about the study below 👇

22.07.2025 15:47 · 👍 6  🔁 3  💬 0  📌 0

Fantastic work by our (now former) lab manager Liv Christiano. We assess the test-retest reliability of OPM and compare it to fMRI and iEEG. 🧠📄🧵

19.07.2025 16:48 · 👍 28  🔁 8  💬 0  📌 1
A gradient of complementary learning systems emerges through meta-learning Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...

Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortex–hippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5

16.07.2025 16:15 · 👍 63  🔁 24  💬 0  📌 3
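The meta-learning setup described in the post above can be illustrated with a minimal toy (my own sketch, not the paper's model): per-layer learning rates, a stand-in for "plasticity", are themselves optimized so that a few inner-loop gradient steps adapt well to new tasks. Sparsity is omitted for brevity, and the finite-difference outer loop is a simplifying assumption.

```python
# Toy meta-learning of layer-wise plasticity (learning rates). Illustrative only.
import random

random.seed(1)

def loss(w, batch):
    """MSE of a toy two-layer linear model: y_hat = w[1] * (w[0] * x)."""
    return sum((w[1] * w[0] * x - y) ** 2 for x, y in batch) / len(batch)

def adapt(w, lrs, batch, steps=5):
    """Inner loop: gradient descent with a separate (meta-learned) rate per layer."""
    w = list(w)
    for _ in range(steps):
        errs = [(w[1] * w[0] * x - y, x) for x, y in batch]
        g0 = sum(2 * e * w[1] * x for e, x in errs) / len(batch)
        g1 = sum(2 * e * w[0] * x for e, x in errs) / len(batch)
        w[0] -= lrs[0] * g0
        w[1] -= lrs[1] * g1
    return w

def make_task():
    """Each task: regress y = a*x from 8 samples, with a fresh slope a."""
    a = random.uniform(-2, 2)
    return [(x, a * x) for x in (random.uniform(-1, 1) for _ in range(8))]

# Outer loop: meta-learn the per-layer rates by finite differences on the
# post-adaptation loss, averaged over a batch of sampled tasks.
lrs, eps, meta_lr = [0.05, 0.05], 1e-3, 0.01
for _ in range(30):
    tasks = [make_task() for _ in range(20)]
    def post_adapt_loss(rates):
        return sum(loss(adapt([1.0, 1.0], rates, t), t) for t in tasks) / len(tasks)
    base = post_adapt_loss(lrs)
    grads = []
    for i in range(2):
        bumped = list(lrs)
        bumped[i] += eps
        grads.append((post_adapt_loss(bumped) - base) / eps)
    lrs = [max(l - meta_lr * g, 1e-4) for l, g in zip(lrs, grads)]
```

The design point mirrors the preprint's framing: plasticity is not hand-set per layer but is itself shaped by an outer optimization over many tasks, so different layers can settle on different rates.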
Numerosity coding in the brain: from early visual processing to abstract representations Abstract. Numerosity estimation refers to the ability to perceive and estimate quantities without explicit counting, a skill crucial for both human and ani

Numerosity coding in the brain: from early visual processing to abstract representations
doi.org/10.1093/cerc...
#neuroscience

15.07.2025 11:07 · 👍 25  🔁 4  💬 0  📌 0
Attractors are usually not mechanisms The mathematical objects cannot be, and "attractor models" have not been established as mechanisms in mammals

Attractors are usually not mechanisms - new blog post: open.substack.com/pub/kording/...

08.07.2025 14:40 · 👍 151  🔁 33  💬 20  📌 9
Music-evoked reactivation during continuous perception is associated with enhanced subsequent recall of naturalistic events Music is a potent cue for recalling personal experiences, yet the neural basis of music-evoked memory remains elusive. We address this question by using the full-length film Eternal Sunshine of the Spotless Mind to examine how repeated musical themes reactivate previously encoded events in cortex and shape next-day recall. Participants in an fMRI study viewed either the original film (with repeated musical themes) or a no-music version. By comparing neural activity patterns between these groups, we found that music-evoked reactivation of neural patterns linked to earlier scenes in the default mode network was associated with improved subsequent recall. This relationship was specific to the music condition and persisted when we controlled for a proxy measure of initial encoding strength (spatial intersubject correlation), suggesting that music-evoked reactivation may play a role in making event memories stick that is distinct from what happens at initial encoding.

Music is an incredibly powerful retrieval cue. What is the neural basis of music-evoked memory reactivation? And how does this reactivation relate to later memory for the retrieved events? In our new study, we used Eternal Sunshine of the Spotless Mind to find out. www.biorxiv.org/content/10.1...

08.07.2025 14:05 · 👍 53  🔁 21  💬 1  📌 5

Check out Zaid's open "Podcast" ECoG dataset for natural language comprehension (w/ Hasson Lab). The paper is now out at Scientific Data (nature.com/articles/s41...) and the data are available on OpenNeuro (openneuro.org/datasets/ds0...).

07.07.2025 21:00 · 👍 40  🔁 17  💬 1  📌 0
Benchmarking methods for mapping functional connectivity in the brain - Nature Methods In this Analysis, Liu et al. benchmark more than 200 pairwise statistics for functional brain connectivity in tasks such as hub mapping, distance relationships, structureโ€“function coupling and behavio...

Nature Methods

Benchmarking methods for mapping functional connectivity in the brain

www.nature.com/articles/s41...

05.07.2025 23:58 · 👍 18  🔁 3  💬 0  📌 0
What dopamine teaches depends on what the brain believes - Nature Neuroscience How does the brain learn to predict rewards? In this issue of Nature Neuroscience, Qian, Burrell et al. show that understanding how dopamine guides learning requires knowledge of how animals interpret...

Nature Neuroscience

What dopamine teaches depends on what the brain believes

www.nature.com/articles/s41...

11.06.2025 13:46 · 👍 42  🔁 17  💬 1  📌 0

Didn't see one for Neuroscience yet; allow me to oblige

18.11.2024 11:03 · 👍 177  🔁 45  💬 6  📌 5
