
Marcelo Mattar

@marcelomattar.bsky.social

Assistant professor at NYU.

2,070 Followers  |  323 Following  |  40 Posts  |  Joined: 16.10.2023

Latest posts by marcelomattar.bsky.social on Bluesky

Our new paper is out! When navigating through an environment, how do we combine our general sense of direction with known landmark states? To explore this, @denislan.bsky.social used a task that allowed subjects (or neural networks) to choose either their next action or next state at each step.

02.08.2025 08:37 · 👍 64  🔁 25  💬 0  📌 0
Preview
NYU Application Support Group Matching Form Dear Prospective Neuroscience & Psychology PhD Students, We are a group of current NYU Neuroscience & Psychology PhD students who would like to help you with your PhD applications. We want to support...

Thinking of applying to US-based Ph.D. programs in neuroscience, psychology, or cognitive science? We’ve got your back! NYU’s Application Support Group is a student-led, free mentorship program offering 1-on-1 support and guidance. Apply now! docs.google.com/forms/d/e/1F...

28.07.2025 15:15 · 👍 12  🔁 13  💬 0  📌 0

🚨 Fully Funded PhD positions

Gonda Brain Institute, Sharp Lab

We will explore how people build and deploy world models efficiently for planning and decision making. We will also seek to characterize how the construal and use of world models are biased in anxiety.

Deadline: 1 Sept 2025. Please share!

22.07.2025 08:48 · 👍 8  🔁 3  💬 0  📌 1

📣 I'm looking for a postdoc to join my lab at NYU! Come work with me on a principled, theory-driven approach to studying language, learning, and reasoning, in humans and AI agents.
Apply here: apply.interfolio.com/170656
And come chat with me at #CogSci2025 if interested!

21.07.2025 22:28 · 👍 43  🔁 20  💬 1  📌 1

Fantastic work by our (now former) lab manager Liv Christiano. We assess the test-retest reliability of OPM and compare it to fMRI and iEEG. 🧠📄🧵

19.07.2025 16:48 · 👍 27  🔁 8  💬 0  📌 1
Preview
Reliability and signal comparison of OPM-MEG, fMRI & iEEG in a repeated movie viewing paradigm Optically pumped magnetometers (OPMs) offer a promising advancement in noninvasive neuroimaging via magnetoencephalography (MEG), but establishing their reliability and comparability to existing metho...

How reliable is OPM-MEG, and how does it compare to other neuroimaging modalities? 🤔

In a new preprint with @s-michelmann.bsky.social, we evaluate the reliability of OPM-MEG within & between individuals, and compare it to fMRI & iEEG during repeated movie viewing. 🧠

📄 doi.org/10.1101/2025...

19.07.2025 16:10 · 👍 30  🔁 14  💬 1  📌 2

Beautiful work by @neurozz.bsky.social and @annaschapiro.bsky.social !

17.07.2025 00:01 · 👍 8  🔁 0  💬 1  📌 0
Preview
A gradient of complementary learning systems emerges through meta-learning Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...

Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortex–hippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5

16.07.2025 16:15 · 👍 61  🔁 24  💬 0  📌 3
Preview
Considering What We Know and What We Don’t Know: Expectations and Confidence Guide Value Integration in Value-Based Decision-Making Abstract. When making decisions, we often have more information about some options than others. Previous work has shown that people are more likely to choose options that they look at more and those t...

Our newest paper, led by Romy Froemer and @fredcallaway.bsky.social, is now out in Open Mind: “Considering What We Know and What We Don’t Know: Expectations and Confidence Guide Value Integration in Value-Based Decision-Making”

direct.mit.edu/opmi/article...

10.07.2025 16:43 · 👍 25  🔁 8  💬 0  📌 1
What Paradigms Can Webcam Eye-Tracking Be Used For? Attempted Replications of Five Cognitive Science Experiments Web-based data collection allows researchers to recruit large and diverse samples with fewer resources than lab-based studies require. Recent innovations have expanded the set of methodologies that are...

Want to know what kinds of studies webcam-based eye tracking can be used for? Here's our take on the current tech. This certainly isn't the first paper on this topic, but it provides some converging evidence about the viability of eye tracking with online methods. online.ucpress.edu/collabra/art...

08.07.2025 19:57 · 👍 20  🔁 4  💬 1  📌 2

Very cool. Will the talk be recorded?

03.07.2025 14:08 · 👍 1  🔁 0  💬 1  📌 0

Looks super interesting! Can't wait to read!

03.07.2025 00:41 · 👍 2  🔁 0  💬 0  📌 0

YAY!!! I'm so happy for you, Anna! 🎉 Can't wait to see what your post-tenure research looks like! I'll be watching closely.

03.07.2025 00:32 · 👍 1  🔁 0  💬 1  📌 0

Thanks!

And yes, the reason our tiny models did so well is that the tasks we studied were so simple (like most tasks studied in neuroscience!)

In more complex tasks, large models will definitely outperform, assuming you have enough data to train them.

02.07.2025 20:42 · 👍 5  🔁 0  💬 0  📌 0
Preview
Discovering cognitive strategies with tiny recurrent neural networks - Nature Modelling biological decision-making with tiny recurrent neural networks enables more accurate predictions of animal choices than classical cognitive models and offers insights into the underlying cog...

Here's the link to our paper: doi.org/10.1038/s415...
Here's the link to the Centaur paper: www.nature.com/articles/s41...
And here's Marcel Binz's thread on Centaur: bsky.app/profile/marc...

02.07.2025 19:03 · 👍 12  🔁 0  💬 1  📌 0

Also thrilled that this work appears in the same @nature issue as another paper I contributed to, Centaur! Like TinyRNNs, it excels at predicting human behavior. Impressively, Centaur works across many tasks, but this comes with a trade-off in model interpretability.

02.07.2025 19:03 · 👍 4  🔁 0  💬 1  📌 0

Crucially, we can discover these patterns for each individual! This is a game-changer for computational psychiatry, as it lets us pinpoint exactly how cognitive processes differ across people, without the constraints of a prespecified learning/decision model like RL.

02.07.2025 19:03 · 👍 11  🔁 0  💬 1  📌 0

This approach uncovered several patterns in the animals' decisions that previous models missed. For example, we found state-dependent learning rates, novel patterns of perseveration, and a peculiar type of forgetting in which an action's value decays toward the value of the alternative action.

02.07.2025 19:03 · 👍 6  🔁 0  💬 1  📌 0

But prediction is only half the story; we also need interpretability! We viewed tiny RNNs as dynamical systems with inputs (observations/rewards) and outputs (actions). Given their small size, we could visualize how their states evolved and discover the strategies they learned.

02.07.2025 19:03 · 👍 3  🔁 0  💬 1  📌 0

Across six different reward-learning tasks, tiny RNNs consistently outperformed dozens of classical cognitive models in predicting the choices of individual animals and humans. Surprisingly, networks with just 2-4 units often performed best on these simple lab tasks.

02.07.2025 19:03 · 👍 5  🔁 0  💬 1  📌 0

Our solution was to use very small RNNs, composed of 1-4 units. These models are still excellent at capturing biological behavior without pre-specified assumptions, yet small enough for us to interpret their mechanisms, combining the best of both worlds.

02.07.2025 19:03 · 👍 5  🔁 0  💬 1  📌 0
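A minimal sketch of the tiny-RNN idea described in this thread, assuming a two-armed bandit task: a 2-unit recurrent network maps (previous action, reward) to action probabilities, and its 2-D hidden state can be visualized directly as a dynamical system. All names are illustrative and the weights below are random placeholders; in the paper, weights are fit to each subject's choice data.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyRNN:
    """A 2-unit RNN mapping (previous action, reward) to action probabilities.
    Random placeholder weights; the actual models are trained on behavior."""
    def __init__(self, n_units=2, n_actions=2, n_inputs=4):
        self.W_in = rng.normal(0, 0.5, (n_units, n_inputs))
        self.W_rec = rng.normal(0, 0.5, (n_units, n_units))
        self.W_out = rng.normal(0, 0.5, (n_actions, n_units))
        self.h = np.zeros(n_units)

    def step(self, prev_action, reward):
        x = np.zeros(4)
        x[prev_action] = 1.0          # one-hot previous action
        x[2] = reward                 # scalar reward input
        x[3] = 1.0                    # bias input
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        logits = self.W_out @ self.h
        p = np.exp(logits - logits.max())
        return p / p.sum(), self.h.copy()

# Simulate 50 trials of a two-armed bandit; arm 0 pays off 80% of the time.
net = TinyRNN()
trajectory, action = [], 0
for t in range(50):
    reward = float(rng.random() < (0.8 if action == 0 else 0.2))
    p, h = net.step(action, reward)
    trajectory.append(h)              # 2-D state: trivially plottable
    action = int(rng.random() < p[1])
trajectory = np.stack(trajectory)
print(trajectory.shape)  # (50, 2)
```

Because the state is only 2-D, the full state trajectory can be plotted and inspected, which is what makes the dynamical-systems interpretation in the thread tractable.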

Cognitive modeling has always faced a dilemma. Traditional approaches based on normative or mechanistic assumptions are simple to understand but often cannot capture behavioral complexity. Large AI models are excellent predictors but are "black boxes" that are hard to interpret.

02.07.2025 19:03 · 👍 5  🔁 0  💬 1  📌 0
Preview
Discovering cognitive strategies with tiny recurrent neural networks - Nature Modelling biological decision-making with tiny recurrent neural networks enables more accurate predictions of animal choices than classical cognitive models and offers insights into the underlying cog...

Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...

02.07.2025 19:03 · 👍 319  🔁 137  💬 7  📌 4

Yay, amazing news! Congrats Lucy!

28.06.2025 00:36 · 👍 1  🔁 0  💬 0  📌 0

Excited to share our upcoming workshop on neuroscience, reinforcement learning, and decision making at RLDM 2025 in Dublin, Ireland, June 11–14!

Check out the terrific speaker lineup:

🔗 sites.google.com/view/neurorl...

Co-organized with @angelaradulescu.bsky.social

@rldmdublin2025.bsky.social

29.05.2025 21:09 · 👍 26  🔁 10  💬 1  📌 3
Preview
Neural dynamics of an extended frontal lobe network in goal-subgoal problem solving Complex behavior calls for hierarchical representation of current state, goal, and component moves. In the human brain, a network of β€œmultiple-demand” (MD) regions underpins cognitive control. We reco...

Now out 🚨🧪: our preprint describing the dynamics of an extended frontal lobe network (4 cortical regions) in monkeys solving complex multi-step spatial problems! We observe distributed codes for goals, states, and planned moves across PFC!

www.biorxiv.org/content/10.1...
#neuroscience #compneuro

🧵👇

29.05.2025 09:55 · 👍 42  🔁 11  💬 1  📌 0

📢 I'm happy to share the preprint: _Reward-Aware Proto-Representations in Reinforcement Learning_ ‼️

My PhD student, Hon Tik Tse, led this work, and my MSc student, Siddarth Chandrasekar, assisted us.

arxiv.org/abs/2505.16217

Basically, it's the successor representation (SR) with rewards. See below 👇

24.05.2025 15:23 · 👍 41  🔁 10  💬 2  📌 0
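As background for "the SR with rewards": a minimal sketch of the classic tabular successor representation, where M[s, s'] estimates the expected discounted future occupancy of s' starting from s, learned by TD. This is standard SR, not the paper's reward-aware variant; the environment (a 3-state deterministic ring) and all names are illustrative.

```python
import numpy as np

# Classic tabular SR learned by TD on a deterministic 3-state ring:
# 0 -> 1 -> 2 -> 0 -> ...
n_states, gamma, alpha = 3, 0.9, 0.1
M = np.zeros((n_states, n_states))

def next_state(s):
    return (s + 1) % n_states

for episode in range(2000):
    s = 0
    for _ in range(30):
        s2 = next_state(s)
        onehot = np.eye(n_states)[s]
        # TD update: M(s) <- M(s) + alpha * (1_s + gamma * M(s') - M(s))
        M[s] += alpha * (onehot + gamma * M[s2] - M[s])
        s = s2

# Given the SR, state values are a dot product with the reward vector,
# which is what makes the SR useful for fast revaluation:
r = np.array([0.0, 0.0, 1.0])
V = M @ r   # V[0] = gamma^2 / (1 - gamma^3) ≈ 2.99 for this ring
```

The appeal of the SR is exactly this factorization: transition structure lives in M, rewards live in r, and values recombine them in one multiplication.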
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes’ rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled β€œmeta-learning” combines Bayesian inference and neural networks into a β€œprior-trained neural network”, described as a neural network that has the priors of a Bayesian model – visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled β€œlearning” goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence β€œcolorless green ideas sleep furiously”).


🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n

20.05.2025 19:04 · 👍 154  🔁 43  💬 4  📌 1
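The distillation idea in this post can be sketched with a toy problem, assuming a Beta prior over coin biases: sample tasks from the Bayesian prior, train a predictor by SGD on outcomes across tasks, and the trained predictor converges to the Bayesian posterior predictive. The paper meta-trains a real neural network; here a lookup table over contexts stands in for the network to make the principle visible, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, N = 2.0, 2.0, 4      # Beta(2, 2) prior over coin bias; 4 flips per task

# Stand-in "network": one prediction per context (heads so far, flips so far),
# trained by SGD on next-flip outcomes across tasks sampled from the prior.
table = np.full((N + 1, N + 1), 0.5)
lr = 0.005

for task in range(100_000):
    theta = rng.beta(a, b)                 # sample a task from the prior
    heads = 0
    for n in range(N):
        y = float(rng.random() < theta)    # next flip in this task
        table[heads, n] += lr * (y - table[heads, n])  # SGD toward outcome
        heads += int(y)

# The trained predictor approximates the Bayesian posterior predictive
# (a + heads) / (a + b + n): the prior has been distilled into the learner.
posterior_pred = (a + 2) / (a + b + 3)     # e.g. after 2 heads in 3 flips
print(table[2, 3], posterior_pred)         # the two should be close
```

The same logic scales up: replacing the table with a neural network trained on tasks sampled from a richer Bayesian prior gives a network whose inductive biases match that prior.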

Congratulations, Fred! Fred has been an outstanding postdoc in my lab. He combines exceptional experimental and modeling skills and is a caring and attentive mentor. This is an excellent opportunity for prospective students and postdocs looking for a lab to join!

08.05.2025 12:59 · 👍 8  🔁 0  💬 0  📌 0

Huge congratulations to Q on becoming Presidential Assistant Professor at City U Hong Kong! Q is truly one of the best researchers building neural network models of human memory. So excited to see all of the amazing things his lab will do!

08.05.2025 01:36 · 👍 11  🔁 0  💬 2  📌 0
