Our new paper is out! When navigating through an environment, how do we combine our general sense of direction with known landmark states? To explore this, @denislan.bsky.social used a task that allowed subjects (or neural networks) to choose either their next action or next state at each step.
02.08.2025 08:37 · 64 likes · 25 reposts · 0 replies · 0 quotes
NYU Application Support Group Matching Form
Dear Prospective Neuroscience & Psychology PhD Students,
We are a group of current NYU Neuroscience & Psychology PhD students who would like to help you with your PhD applications. We want to support...
Thinking of applying to US-based Ph.D. programs in neuroscience, psychology, or cognitive science? We've got your back! NYU's Application Support Group is a student-led, free mentorship program offering 1-on-1 support and guidance. Apply now! docs.google.com/forms/d/e/1F...
28.07.2025 15:15 · 12 likes · 13 reposts · 0 replies · 0 quotes
🚨 Fully Funded PhD positions
Gonda Brain Institute, Sharp Lab
We will explore how people build and deploy world models efficiently for planning and decision-making. We will also seek to characterize how the construal and use of world models are biased in anxiety.
Deadline: 1 Sept 2025. Please share!
22.07.2025 08:48 · 8 likes · 3 reposts · 0 replies · 1 quote
📣 I'm looking for a postdoc to join my lab at NYU! Come work with me on a principled, theory-driven approach to studying language, learning, and reasoning, in humans and AI agents.
Apply here: apply.interfolio.com/170656
And come chat with me at #CogSci2025 if interested!
21.07.2025 22:28 · 43 likes · 20 reposts · 1 reply · 1 quote
Fantastic work by our (now former) lab manager Liv Christiano. We assess the test-retest reliability of OPM-MEG and compare it to fMRI and iEEG. 🧠🧵
19.07.2025 16:48 · 27 likes · 8 reposts · 0 replies · 1 quote
Reliability and signal comparison of OPM-MEG, fMRI & iEEG in a repeated movie viewing paradigm
Optically pumped magnetometers (OPMs) offer a promising advancement in noninvasive neuroimaging via magnetoencephalography (MEG), but establishing their reliability and comparability to existing metho...
How reliable is OPM-MEG, and how does it compare to other neuroimaging modalities? 🤔
In a new preprint with @s-michelmann.bsky.social, we evaluate the reliability of OPM-MEG within & between individuals, and compare it to fMRI & iEEG during repeated movie viewing. 🧠
doi.org/10.1101/2025...
19.07.2025 16:10 · 30 likes · 14 reposts · 1 reply · 2 quotes
Beautiful work by @neurozz.bsky.social and @annaschapiro.bsky.social !
17.07.2025 00:01 · 8 likes · 0 reposts · 1 reply · 0 quotes
A gradient of complementary learning systems emerges through meta-learning
Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...
Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortexβhippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5
16.07.2025 16:15 · 61 likes · 24 reposts · 0 replies · 3 quotes
Very cool. Will the talk be recorded?
03.07.2025 14:08 · 1 like · 0 reposts · 1 reply · 0 quotes
Looks super interesting! Can't wait to read!
03.07.2025 00:41 · 2 likes · 0 reposts · 0 replies · 0 quotes
YAY!!! I'm so happy for you, Anna! Can't wait to see what your post-tenure research looks like! I'll be watching closely.
03.07.2025 00:32 · 1 like · 0 reposts · 1 reply · 0 quotes
Thanks!
And yes, the reason our tiny models did so well is that the tasks we studied were so simple (like most tasks studied in neuroscience!)
In more complex tasks, large models will definitely outperform, assuming you have enough data to train them.
02.07.2025 20:42 · 5 likes · 0 reposts · 0 replies · 0 quotes
Also thrilled that this work appears in the same @nature issue as another paper I contributed to, Centaur! Like TinyRNNs, it excels at predicting human behavior. Impressively, Centaur works across many tasks, but this comes with a trade-off in model interpretability.
02.07.2025 19:03 · 4 likes · 0 reposts · 1 reply · 0 quotes
Crucially, we can discover these patterns for each individual! This is a game-changer for computational psychiatry, as it lets us pinpoint exactly how cognitive processes differ across people, without the constraints of a prespecified learning/decision model like RL.
02.07.2025 19:03 · 11 likes · 0 reposts · 1 reply · 0 quotes
This approach uncovered several patterns in the animals' decisions that previous models missed. E.g., we found state-dependent learning rates, novel patterns of perseveration, and a peculiar type of forgetting in which an action's value decays towards the value of the alternative action.
02.07.2025 19:03 · 6 likes · 0 reposts · 1 reply · 0 quotes
But prediction is only half the story; we also need interpretability! We viewed tiny RNNs as dynamical systems with inputs (observations/rewards) and outputs (actions). Given their small size, we could visualize how their states evolved and discover the strategies they learned.
02.07.2025 19:03 · 3 likes · 0 reposts · 1 reply · 0 quotes
Across six different reward-learning tasks, tiny RNNs consistently outperformed dozens of classical cognitive models in predicting the choices of individual animals and humans. Surprisingly, networks with just 2-4 units often performed best in these simple lab tasks.
02.07.2025 19:03 · 5 likes · 0 reposts · 1 reply · 0 quotes
Our solution was to use very small RNNs, composed of just 1-4 units. These models are still great at modeling biological behavior without the need for pre-specified assumptions, yet are small enough for us to interpret their mechanisms, combining the best of both worlds.
02.07.2025 19:03 · 5 likes · 0 reposts · 1 reply · 0 quotes
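For intuition, the rolled-out computation of such a tiny network can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code: the 2-unit size, the random (untrained) weights, and the [previous action, previous reward] input format are all assumptions made for the example.

```python
import numpy as np

def tiny_rnn_step(h, x, Wh, Wx, b):
    """One update of a tiny vanilla RNN with a 2-unit hidden state."""
    return np.tanh(Wh @ h + Wx @ x + b)

def predict_choices(inputs, Wh, Wx, b, Wout):
    """Roll the RNN over a trial sequence; softmax over 2 actions."""
    h = np.zeros(2)
    probs = []
    for x in inputs:              # x = [previous action, previous reward]
        h = tiny_rnn_step(h, x, Wh, Wx, b)
        logits = Wout @ h
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    return np.array(probs), h

# Random weights, just to show the shapes involved (a real fit would
# train these on an individual subject's choice data).
rng = np.random.default_rng(0)
Wh, Wx = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
b, Wout = np.zeros(2), rng.normal(size=(2, 2))
trials = rng.integers(0, 2, size=(10, 2)).astype(float)
probs, h = predict_choices(trials, Wh, Wx, b, Wout)  # probs: (10, 2)
```

Because the hidden state is only 2-dimensional, its trajectory over trials can be plotted directly, which is what makes the dynamical-systems analysis described above tractable.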
Cognitive modeling has always faced a dilemma. Traditional approaches based on normative or mechanistic assumptions are simple to understand but often cannot capture behavioral complexity. Large AI models are excellent predictors but are "black boxes" that are hard to interpret.
02.07.2025 19:03 · 5 likes · 0 reposts · 1 reply · 0 quotes
Discovering cognitive strategies with tiny recurrent neural networks - Nature
Modelling biological decision-making with tiny recurrent neural networks enables more accurate predictions of animal choices than classical cognitive models and offers insights into the underlying cog...
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...
02.07.2025 19:03 · 319 likes · 137 reposts · 7 replies · 4 quotes
Yay, amazing news! Congrats Lucy!
28.06.2025 00:36 · 1 like · 0 reposts · 0 replies · 0 quotes
Excited to share our upcoming workshop on neuroscience, reinforcement learning, and decision making at RLDM 2025 in Dublin, Ireland β June 11β14!
Check out the terrific speaker lineup:
π sites.google.com/view/neurorl...
Co-organized with @angelaradulescu.bsky.social
@rldmdublin2025.bsky.social
29.05.2025 21:09 · 26 likes · 10 reposts · 1 reply · 3 quotes
Neural dynamics of an extended frontal lobe network in goal-subgoal problem solving
Complex behavior calls for hierarchical representation of current state, goal, and component moves. In the human brain, a network of "multiple-demand" (MD) regions underpins cognitive control. We reco...
Now out 🚨🧪: our preprint describing the dynamics of an extended frontal lobe network (4 cortical regions) in monkeys solving complex multi-step spatial problems! We observe distributed codes for goals, states, and planned moves across PFC!
www.biorxiv.org/content/10.1...
#neuroscience #compneuro
🧵👇
29.05.2025 09:55 · 42 likes · 11 reposts · 1 reply · 0 quotes
📢 I'm happy to share the preprint: _Reward-Aware Proto-Representations in Reinforcement Learning_ ‼️
My PhD student, Hon Tik Tse, led this work, and my MSc student, Siddarth Chandrasekar, assisted us.
arxiv.org/abs/2505.16217
Basically, it's the SR with rewards. See below 👇
24.05.2025 15:23 · 41 likes · 10 reposts · 2 replies · 0 quotes
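For readers unfamiliar with the successor representation (SR): under a fixed policy it caches expected discounted future state occupancies, so values factor into occupancy times reward. A minimal NumPy sketch of the standard reward-free SR that the preprint builds on (the 3-state chain and γ = 0.9 are illustrative; the preprint's reward-aware variant is not reproduced here):

```python
import numpy as np

# Successor representation for a 3-state chain under a fixed policy.
# M[s, s'] = expected discounted future occupancy of s' starting from s,
# i.e. M = I + γT + γ²T² + ... = (I − γT)⁻¹.
gamma = 0.9
T = np.array([[0.0, 1.0, 0.0],   # state 0 -> state 1
              [0.0, 0.0, 1.0],   # state 1 -> state 2
              [0.0, 0.0, 1.0]])  # state 2 is absorbing
M = np.linalg.inv(np.eye(3) - gamma * T)

# Values factor through the SR: V = M @ r for any reward vector r,
# so rewards can be swapped without relearning the transition structure.
r = np.array([0.0, 0.0, 1.0])
V = M @ r   # [8.1, 9.0, 10.0]
```

The separation of `M` from `r` is what makes the SR attractive for flexible revaluation; folding reward information into the representation itself, as the preprint's title suggests, trades some of that separation for other benefits.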
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model, visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").
🤖🧠 Paper out in Nature Communications! 🧠🤖
Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?
Our answer: Use meta-learning to distill Bayesian priors into a neural network!
www.nature.com/articles/s41...
1/n
20.05.2025 19:04 · 154 likes · 43 reposts · 4 replies · 1 quote
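The recipe can be caricatured in a few lines: sample tasks from the prior, train a network across them, and the prior's inductive bias gets baked into the weights. A deliberately tiny sketch, not the paper's method: a Beta coin-flip prior and a one-parameter linear "network" stand in for the actual models, and all names and numbers here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Sample a task from a Bayesian prior: a coin with bias
    theta ~ Beta(2, 2), plus 5 observed flips of that coin."""
    theta = rng.beta(2.0, 2.0)
    flips = (rng.random(5) < theta).astype(float)
    return flips, theta

# Meta-learning loop (sketch): a linear 'network' maps the mean of the
# observed flips to a bias estimate, trained by SGD across many
# prior-sampled tasks. After training, its predictions approximate the
# Bayesian posterior mean rather than the raw maximum-likelihood mean,
# because the prior's statistics are distilled into w and b.
w, b, lr = 1.0, 0.0, 0.05
for _ in range(5000):
    flips, theta = sample_task()
    x = flips.mean()
    err = (w * x + b) - theta   # squared-error gradient step
    w -= lr * err * x
    b -= lr * err
```

After training, an input of all heads (x = 1.0) yields an estimate shrunk below 1.0 toward the prior mean of 0.5, which is the qualitative signature of a distilled Bayesian prior.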
Congratulations, Fred! Fred has been an outstanding postdoc in my lab. He combines exceptional experimental and modeling skills and is a caring and attentive mentor. This is an excellent opportunity for prospective students and postdocs looking for a lab to join!
08.05.2025 12:59 · 8 likes · 0 reposts · 0 replies · 0 quotes
Huge congratulations to Q on becoming Presidential Assistant Professor at City U Hong Kong! Q is truly one of the best researchers building neural network models of human memory. So excited to see all of the amazing things his lab will do!
08.05.2025 01:36 · 11 likes · 0 reposts · 2 replies · 0 quotes
Neural reverse engineer, scientist at Meta Reality Labs, Adjunct Prof at Stanford.
Assistant Professor at Northwestern University, neuroscience, brain imaging, networks
Second-year Psych PhD at Stanford in the Causality in Cognition Lab!
Computational cognitive scientist at NYU. Founder of Growing up in Science.
PhD student @ Harvard || computational cognitive science, human decision making and reasoning
Brain imaging, machine learning, neuroscience, mental disorders
https://sites.google.com/view/yeolab
Professor of Neuroimaging Statistics
Oxford Big Data Institute
Nuffield Department of Population Health
University of Oxford
DPhil(PhD) student in #CognitiveNeuroscience @ox.ac.uk, @oxexppsy.bsky.social | B.S. from Peking University.
Website: https://www.psy.ox.ac.uk/people/deng-pan
Twitter: https://x.com/DengPan18
Natural and artificial general intelligence.
https://marcelbinz.github.io/
NRDlab: https://d-r-b-o-b.github.io/
CoCo Center: https://coco.psych.gatech.edu/
Computational neuroscientist, NeuroAI lab @EPFL
Mathematics/Music Composition Undergrad at Soochow Univ. in Taiwan
Computational/Theoretical Neuroscience RA at Academia Sinica
Interested in the interplay between memory and mental simulation, DeepRL & Philosophy of Neuroscience!!
Ex-Esports Coach (Apex)
NGP student at UCSD | Computational neuroscience | Neural networks | Marcelo Mattar Lab | Marcus Benna Lab
postdoc/lecturer at Princeton | he/him | semiprofessional dungeon master | https://snastase.github.io/
Cognitive neuroscientist and AI researcher
Cognitive Scientist/Geographer. Associate Professor at Bond University & PI at the Singapore-ETH Centre. VR, digital health, navigation and ageing.
Co-founder of Project Implicit, Society for Improving Psychological Science, and the Center for Open Science; Professor at the University of Virginia
Y. Eva Tan Professor in Neurotechnology, MIT. Investigator, HHMI. Leader, Synthetic Neurobiology Group, http://synthneuro.org. Scientist, inventor, entrepreneur.
Official account of the NYU Center for Data Science, the home of the Undergraduate, Masterβs, and Ph.D. programs in data science. cds.nyu.edu