NEW: #Kempner researchers develop a mean-field theory of task-trained RNNs that bridges random and learned connectivity, and find macaque motor cortex is best captured by an intermediate, task-specific recurrent structure.
Read the blog post: bit.ly/47f3Ldl
04.03.2026 16:46
I am totally pumped about this new work. "Task-trained RNNs" are a powerful and influential framework in neuroscience, but have lacked a firm theoretical footing. This work provides one, and makes direct contact with the classical theory of random RNNs:
www.biorxiv.org/content/10.6...
04.03.2026 17:12
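For readers who want the classical picture this post refers to, here is a minimal sketch (my own illustration, not the paper's code) of the textbook random-RNN setup: a rate network with i.i.d. Gaussian couplings becomes chaotic as the gain g crosses 1, and the population-averaged autocorrelation estimated below is the order parameter of the classical mean-field (DMFT) analysis. The gain, network size, and integration settings are arbitrary demo choices.

```python
# Minimal sketch of the classical random-RNN theory (illustration only).
# Rate dynamics dx/dt = -x + J*tanh(x) with i.i.d. couplings
# J_ij ~ N(0, g^2/N) are chaotic for gain g > 1; the DMFT order parameter
# is the population-averaged autocorrelation of the rates.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, T = 1000, 1.5, 0.05, 2000          # size, gain, Euler step, num steps
J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # random Gaussian connectivity
x = rng.normal(size=N)

rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))          # forward-Euler integration
    rates[t] = np.tanh(x)

r = rates[T // 2:] - rates[T // 2:].mean(0)  # discard transient, de-mean
ac = np.array([(r[: len(r) - k] * r[k:]).mean() for k in range(0, 200, 20)])
print(ac / ac[0])  # decays with lag, but activity stays irregular (chaos)
```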
Looking forward to presenting on "How behavior shapes recurrent circuits across sensory systems and species: from vision to touch" at the University of Chicago Neuroscience and ML workshop on Wednesday! Details in the thread below.
23.02.2026 23:36
I'm deeply thankful to my supervisor, Mark Brandon (@markbrandonlab.bsky.social), for his patience, guidance, and constant support throughout this project, and to our collaborators in the Cengiz Pehlevan (@cpehlevan.bsky.social) lab at Harvard for their thoughtful and generous contributions.
16.01.2026 18:48
All theory is wrong until verified by data. Greatly indebted to @mhyaghoubi.bsky.social, @markbrandonlab.bsky.social, @douglasresearch.bsky.social for finding the hippocampus encoding reward prediction! Grateful to my advisor @cpehlevan.bsky.social, @kempnerinstitute.bsky.social.
#RL #hippocampus
19.01.2026 09:32
Delighted to have contributed to this work. Huge kudos to everyone involved.
15.01.2026 00:20
I'm very happy to share the latest from my lab published in @Nature
Hippocampal neurons that initially encode reward shift their tuning over the course of days to precede or predict reward.
Full text here:
rdcu.be/eY5nh
14.01.2026 21:32
Very excited about this new work from the omnipotent Owen, with me and Ashok Litwin-Kumar! Can we reconcile low- and high-dimensional activity in neural circuits by recognizing that these circuits ~multitask~?
(Plausibly, yes!)
15.12.2025 20:17
First paper from the lab!
We propose a model that separates estimation of odor concentration and odor presence, and map it onto olfactory bulb circuits
Led by @chenjiang01.bsky.social and @mattyizhenghe.bsky.social joint work with @jzv.bsky.social and with @neurovenki.bsky.social @cpehlevan.bsky.social
04.11.2025 15:40
Connectivity Structure and Dynamics of Nonlinear Recurrent Neural Networks
The structure of brain connectivity predicts collective neural activity, with a small number of connectivity features determining activity dimensionality, linking circuit architecture to network-level...
Now in PRX: Theory linking connectivity structure to collective activity in nonlinear RNNs!
For neuro fans: connectivity structure can be invisible in single neurons yet still shape population activity
For low-rank RNN fans: a theory of rank = O(N)
For physics fans: fluctuations around the DMFT saddle determine the dimension of activity
03.11.2025 21:47
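A quick gloss on "dimension of activity" for readers following along: a standard way to summarize it is the participation ratio of the activity covariance spectrum, PR = (Σᵢ λᵢ)² / Σᵢ λᵢ². The sketch below is a generic illustration of that measure with toy data of my own making, not the paper's derivation.

```python
# Participation ratio: a standard effective-dimensionality measure for
# population activity (illustration only; function name and toy data are mine).
import numpy as np

def participation_ratio(activity):
    """activity: (time, neurons) array. Returns effective linear dimension."""
    cov = np.cov(activity, rowvar=False)   # neuron-by-neuron covariance
    lam = np.linalg.eigvalsh(cov)          # eigenvalue spectrum
    lam = np.clip(lam, 0.0, None)          # guard tiny negative round-off
    return lam.sum() ** 2 / (lam ** 2).sum()

# Sanity check: activity confined to a k-dimensional subspace gives PR ~ k.
rng = np.random.default_rng(1)
k, N, T = 5, 200, 5000
latents = rng.normal(size=(T, k))          # k independent latent signals
mixing = rng.normal(size=(k, N))           # random embedding into N neurons
print(participation_ratio(latents @ mixing))  # close to 5
```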
Applying to do a postdoc or PhD in theoretical ML or neuroscience this year? Consider joining my group (starting next Fall) at UT Austin!
POD Postdoc: oden.utexas.edu/programs-and...
CSEM PhD: oden.utexas.edu/academics/pr...
23.10.2025 21:36
William Qian, Cengiz Pehlevan: Discovering alternative solutions beyond the simplicity bias in recurrent neural networks https://arxiv.org/abs/2509.21504 https://arxiv.org/pdf/2509.21504 https://arxiv.org/html/2509.21504
29.09.2025 06:50
NeurIPS 2025 Workshop DBM Findings
Less than 1 day left until the Brain & Mind Workshop submission deadline!
Submit to our Findings or Tutorials track on OpenReview.
Findings track submission: openreview.net/group?id=Neu...
Tutorial track submission: openreview.net/group?id=Neu...
More info: data-brain-mind.github.io
07.09.2025 16:30
Thank you so much for the kind shoutout! Grateful to be part of such a fantastic team.
07.09.2025 16:45
Frontiers | Summary statistics of learning link changing neural representations to behavior
How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical phy...
Since I'm back on BlueSky - with @frostedblakess.bsky.social and @cpehlevan.bsky.social we wrote a brief perspective on how ideas about summary statistics from the statistical physics of learning could potentially help inform neural data analysis... (1/2)
04.09.2025 18:30
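As one concrete example of the kind of summary statistic the perspective has in mind: kernel-overlap order parameters from the statistical physics of learning have close cousins in neural data analysis, such as linear centered kernel alignment (CKA) between population responses to the same stimuli across sessions. The choice of CKA and the toy data below are my illustration, not the authors'.

```python
# Linear CKA between two sessions' population responses: a kernel-overlap-style
# summary statistic for tracking changing representations (illustration only).
import numpy as np

def linear_cka(X, Y):
    """X, Y: (stimuli, neurons) response matrices from two sessions."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(2)
early = rng.normal(size=(100, 50))  # day-1 responses to 100 stimuli
late = early @ rng.normal(size=(50, 50)) * 0.5 + rng.normal(size=(100, 50))
print(linear_cka(early, early), linear_cka(early, late))  # 1.0 vs. lower
```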
Convergent motifs of early olfactory processing are recapitulated by layer-wise efficient coding
The architecture of early olfactory processing is a striking example of convergent evolution. Typically, a panel of broadly tuned receptors is selectively expressed in sensory neurons (each neuron exp...
Excited to share new computational work, led by @jzv.bsky.social, driven by Juan Carlos Fernandez del Castillo, with a contribution from Farhad Pashakanloo. We recover 3 core motifs in the olfactory system of evolutionarily distant animals using a biophysically-grounded model + efficient coding ideas!
04.09.2025 16:51
AIQ: Artificial Intelligence Quantified
Great to have this video about my @darpa.mil Artificial Intelligence Quantified (AIQ) program out! Very exciting program with absolutely fantastic teams. Stay tuned for some jaw dropping announcements!
www.youtube.com/watch?v=KVRF...
02.09.2025 21:50
I am extremely grateful to be awarded the National University of Singapore (NUS) Development Grant, and to be a Young NUS Fellow! I look forward to collaborating with the Yong Loo Lin School of Medicine on exciting projects. This is my first grant, with hopefully many more to come! #NUS #NeuroAI
27.08.2025 14:31
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation
Our new Simons Collaboration on the Physics of Learning and Neural Computation will develop powerful tools from #physics, #math, computer science and theoretical #neuroscience to understand how large neural networks learn, compute, scale, reason and imagine: www.simonsfoundation.org/2025/08/18/s...
19.08.2025 14:43
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation
Congratulations to #KempnerInstitute associate faculty member @cpehlevan.bsky.social for joining the new @simonsfoundation.org Simons Collaboration on the Physics of Learning and Neural Computation!
www.simonsfoundation.org/2025/08/18/s...
#AI #neuroscience #NeuroAI #physics #ANNs
18.08.2025 18:57
Very excited to lead this new @simonsfoundation.org collaboration on the physics of learning and neural computation to develop powerful tools from physics, math, CS, stats, neuro and more to elucidate the scientific principles underlying AI. See our website for more: www.physicsoflearning.org
18.08.2025 17:48
Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind
Call for: Findings (4- or 8-page) + Tutorials tracks
Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social
Learn more: data-brain-mind.github.io
04.08.2025 15:28
Solvable Model of In-Context Learning Using Linear Attention - Kempner Institute
Attention-based architectures are a powerful force in modern AI. In particular, the emergence of in-context learning enables these models to perform tasks far beyond the original next-token prediction...
New in the #DeeperLearningBlog: the #KempnerInstitute's Mary Letey presents work recently published in PNAS that offers generalizable insights into in-context learning (ICL) in an analytically solvable model architecture.
bit.ly/4lPK15p
#AI @pnas.org
(1/2)
28.07.2025 19:24
Asymptotic theory of in-context learning by linear attention | PNAS
Transformers have a remarkable ability to learn and execute tasks based on examples provided within the input itself, without explicit prior traini...
Great to see this one finally out in PNAS! Asymptotic theory of in-context learning by linear attention: www.pnas.org/doi/10.1073/... Many thanks to my amazing co-authors Yue Lu, Mary Letey, Jacob Zavatone-Veth @jzv.bsky.social and Anindita Maiti!
11.07.2025 07:33
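For context, a minimal sketch of the setting the paper analyzes: a linear-attention head (attention scores without softmax) reads a prompt of (x, y) pairs plus a query and predicts the query's label. The hand-set projections below are a toy construction of mine showing how such a head can realize a regression-like in-context estimate; the paper's trained parameterization and asymptotic analysis go well beyond this.

```python
# Toy sketch of in-context regression with one linear-attention head
# (my construction, not the paper's trained model). The head computes
# pred = (1/L) * sum_i (x_q . x_i) y_i ~ x_q . w*, a regression-like estimate.
import numpy as np

rng = np.random.default_rng(3)
d, L = 8, 256                          # input dimension, context length
w_star = rng.normal(size=d)            # per-prompt task vector
X = rng.normal(size=(L, d))            # in-context inputs x_i
y = X @ w_star                         # in-context labels y_i
x_q = rng.normal(size=d)               # query input

Z = np.hstack([X, y[:, None]])         # context tokens: (x_i, y_i) stacked
z_q = np.concatenate([x_q, [0.0]])     # query token, label slot zeroed out

P = np.eye(d + 1)[:, :d]               # hand-set key/query projection (x part)
v = np.eye(d + 1)[:, d]                # hand-set value projection (y part)
scores = (z_q @ P) @ (Z @ P).T         # linear attention: no softmax
pred = scores @ (Z @ v) / L            # readout: in-context estimate of y_q

print(pred, x_q @ w_star)              # close, up to O(sqrt(d/L)) fluctuations
```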
#eNeuro: Obeid and Miller identify distinct neural computations in the primary visual cortex that explain how surrounding context suppresses perception of visual figures and features. @harvardseas.bsky.social
vist.ly/3n6tfb2
13.06.2025 23:32