
Mark Burrell

@mhburrell.bsky.social

36 Followers  |  28 Following  |  12 Posts  |  Joined: 25.11.2024

Latest posts by mhburrell.bsky.social on Bluesky

🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧡

19.09.2025 13:05 — 👍 190    🔁 67    💬 10    📌 3

Read more about it and see our smiling faces: https://www.mcb.harvard.edu/department/news/decoding-learning-how-cues-and-rewards-shape-behavior-and-dopamine-signals/ @harvardmcb.bsky.social (12/12)

18.03.2025 15:04 — 👍 6    🔁 1    💬 0    📌 0

This work is a product of a tremendous team: Lechen (Selina) Qian, Jay A Hennig (@jhennig.bsky.social), Sara Matias (@saramatias.bsky.social), Venki Murthy (@neurovenki.bsky.social), Sam Gershman (@gershbrain.bsky.social) and Naoshige Uchida (@naoshigeuchida.bsky.social)
(11/12)

18.03.2025 15:04 — 👍 4    🔁 1    💬 1    📌 0

We thank the reviewers, whose comments helped us refine our explanations of the various models and why they succeed or fail.
(10/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0

Altogether, we show how TD learning provides a comprehensive explanation of the effects of contingency on associative learning, and we discuss how this guides our future study of how the brain learns causality. (9/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0
Post image

Finally, we showed that a novel model that relies on retrospective contingency, ANCCR (doi:10.1126/science.abq6740), does not explain our results under any parameter combination. (8/12)

18.03.2025 15:04 — 👍 3    🔁 0    💬 1    📌 0
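For context on what "retrospective contingency" means here, a minimal sketch of the idea, assuming a simple trial-count formulation. This is only the intuition behind retrospective measures, not an implementation of the ANCCR model, and all counts are hypothetical.

```python
# A retrospective contingency asks how strongly a reward "points back" to the
# cue, rather than how well the cue predicts reward. Intuition only; this is
# NOT the ANCCR model, and all counts below are hypothetical.

def retrospective_contingency(n_cued_rewards, n_rewards, n_cue_trials, n_trials):
    p_cue_given_reward = n_cued_rewards / n_rewards  # P(cue preceded reward)
    p_cue = n_cue_trials / n_trials                  # base rate of the cue
    return p_cue_given_reward - p_cue

# Cue on 100 of 200 trials, 90 cued rewards, no uncued rewards:
print(retrospective_contingency(90, 90, 100, 200))    # 0.5
# Same sessions after adding 45 uncued rewards:
print(retrospective_contingency(90, 135, 100, 200))   # ~0.17
```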
Post image

Moreover, working with @jhennig.bsky.social, we showed that small RNNs develop similar state space representations and explain our results in the same manner. (7/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0
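A minimal sketch of the kind of setup described in the post above: a small recurrent network trained to predict discounted future reward with a TD-style objective. The architecture, sizes, and trial statistics here are illustrative assumptions, not the paper's; the scientific claim concerns the hidden-state representation such a network ends up developing.

```python
# Sketch only: a small GRU receives cue/reward observations and is trained so
# that its value output satisfies a TD-style bootstrapped target. Trial timing,
# network size, and hyperparameters are made up for illustration.
import torch
import torch.nn as nn

gamma = 0.9
rnn = nn.GRU(input_size=2, hidden_size=16, batch_first=True)  # inputs: [cue, reward]
value_head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(value_head.parameters()), lr=1e-3)

def make_trial(T=20, delay=5):
    """One hypothetical trial: cue at t=2, reward `delay` steps later."""
    x = torch.zeros(1, T, 2)
    x[0, 2, 0] = 1.0          # cue
    x[0, 2 + delay, 1] = 1.0  # reward
    return x

for step in range(1000):
    x = make_trial()
    h, _ = rnn(x)                          # hidden state at every timestep
    v = value_head(h).squeeze(-1)          # predicted value V(t)
    r = x[0, 1:, 1]                        # reward observed at t+1
    td_target = r + gamma * v[0, 1:].detach()
    td_error = td_target - v[0, :-1]       # the quantity compared with dopamine
    loss = (td_error ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```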
Post image

We sought to identify a TD model that could explain these changes. While several classic implementations of TD did not work (e.g. CSC & microstimuli), TD equipped with a state representation that incorporated the animal’s learned knowledge of the task structure was able to explain all our results. (6/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0
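To make the post above concrete, here is a minimal sketch of TD(0) with a CSC-like representation on a single cue-reward trial type. Parameters and timings are illustrative, not the paper's; richer representations (microstimuli, or states that incorporate learned task structure) change only how the features are defined, not the TD update itself.

```python
# Minimal sketch of TD(0) with a complete serial compound (CSC) representation:
# each timestep after cue onset gets its own feature, while the pre-cue period
# is unrepresented (value fixed at 0), so cue onset stays "unpredicted".
import numpy as np

delay = 5                  # timesteps from cue onset to reward (illustrative)
alpha, gamma = 0.1, 0.95   # learning rate, discount factor
w = np.zeros(delay + 1)    # one value weight per post-cue timestep

def value(t):
    return w[t] if 0 <= t <= delay else 0.0   # 0 outside the represented window

for trial in range(500):
    for t in range(-1, delay):                 # t = -1 is the last pre-cue step
        r_next = 1.0 if t + 1 == delay else 0.0
        delta = r_next + gamma * value(t + 1) - value(t)   # TD error ~ dopamine
        if t >= 0:
            w[t] += alpha * delta
        else:
            cue_response = delta               # TD error at cue onset

print(round(cue_response, 3))  # after learning: ~gamma**delay, the "cue response"
print(np.round(w, 3))          # learned values ramp up toward the reward
```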

Another group received the same increase in the number of rewards, but the new rewards were preceded by a novel cue. This important control reveals that a classic definition of contingency, ΔP, does not adequately describe the pattern of changes in dopamine and behavior. (5/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0
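For readers who haven't met it, ΔP is the probability of reward given the cue minus the probability of reward given no cue. A toy calculation, with made-up numbers, of why the novel-cue control in the post above separates ΔP from what dopamine and behavior actually do:

```python
# Delta-P contingency: P(reward | cue) - P(reward | no cue). Numbers are
# hypothetical and chosen only to illustrate the logic of the control.
def delta_p(p_reward_given_cue, p_reward_given_no_cue):
    return p_reward_given_cue - p_reward_given_no_cue

print(delta_p(0.9, 0.0))    # conditioning: cue strongly predicts reward -> 0.90
print(delta_p(0.9, 0.45))   # degradation: uncued rewards added         -> 0.45
print(delta_p(0.9, 0.45))   # control: extra rewards follow a NOVEL cue, so
                            # rewards without the original cue are just as
                            # common and Delta-P is identical, yet dopamine and
                            # behavior differ between the two groups.
```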
Post image

We used a Pavlovian contingency degradation task to examine how behavior and dopamine activity are modulated during contingency learning. Mice were first trained on a simple conditioning task; then one group of mice received both cued and uncued rewards. (4/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0
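A rough sketch of the session structure the post above describes, with placeholder probabilities; the actual task parameters are in the paper.

```python
# Placeholder sketch of the contingency-degradation design: after conditioning,
# one group gets extra uncued rewards, and the control group gets the same
# extra rewards preceded by a novel cue. Probabilities here are made up.
import random

def make_trial(group, p_extra=0.5):
    """Return (cue, rewarded) for one trial."""
    if group == "conditioning" or random.random() >= p_extra:
        return "cue_A", True             # original cued reward
    if group == "degradation":
        return None, True                # extra reward with no cue
    if group == "control":
        return "cue_B", True             # extra reward preceded by a novel cue

session = [make_trial("degradation") for _ in range(200)]
print(sum(cue is None for cue, _ in session), "uncued rewards out of", len(session))
```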

Contingency, the degree to which a stimulus predicts an outcome, is a critical factor in shaping animal behavior during associative learning. But the neural mechanisms linking contingency to behavior have been elusive. (3/12)

18.03.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0

In short, we found that we can explain the effects of contingency on both an animal’s behavior and its ventral striatal dopamine responses using temporal difference (TD) learning equipped with appropriate state space representations. (2/12)

18.03.2025 15:04 — 👍 3    🔁 0    💬 1    📌 0
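The quantity being compared with dopamine throughout this thread is the temporal difference (TD) error, shown here in its standard textbook form (nothing specific to this paper):

```python
# Standard TD error / reward prediction error, the signal classically compared
# with phasic dopamine: delta_t = r_{t+1} + gamma * V(s_{t+1}) - V(s_t).
def td_error(reward_next, value_next, value_now, gamma=0.95):
    return reward_next + gamma * value_next - value_now

print(td_error(reward_next=1.0, value_next=0.0, value_now=0.8))  # ~0.2: better than expected
```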
Post image

I’m happy to share that our latest work (co-led by Selina Qian) has today been published in its final form in @natureneuro.bsky.social. Read here: https://www.nature.com/articles/s41593-025-01915-4
(1/12)

18.03.2025 15:04 — 👍 30    🔁 14    💬 2    📌 2
