While preparing your #NCMKobe26 abstract, read through the highlights from #NCMPan25! The meeting highlight article is now available for review.
Thank you to some of the scholarship winners from 2025 for putting the article together.
journals.physiology....
Faculty at Predominantly Undergraduate Institutions:
Your research matters and we want to see you at #NCMKobe26!
NCM offers a fellowship to support PUI faculty presenting at the meeting.
Up to $1,500 in travel support.
More info: ncm-society.org/dive...
Come share your passion for motor control, sensory systems, neurophysiology, neurotechnology, and more at #NCMKobe26!!
Deadline to submit for oral presentations at #NCMKobe26 is in just under a month!
We especially welcome Team/Panel proposals: a 2-hour session with 4 talks + discussion. Historically, the acceptance rate is higher for panels than for individual talks.
Poster submission closes in February.
Lastly, I should add -- I'm not the first to propose policy gradient RL for modeling human motor learning. Check out @nidhise.bsky.social 's excellent paper arguing for policy-gradient RL in locomotor learning:
Our 2024 paper showed that policy-gradient RL (with performance-based memory updates) predicts long-horizon motor learning. Now, @adrianhaith.bsky.social shows that policy-gradient RL also explains learning in other, shorter-horizon tasks. Exciting!
www.biorxiv.org/content/10.1...
I suspect we stand to learn more from the roboticists than the other way around. Progress there is accelerating rapidly! But maybe there will be insights that can go in the other direction too. I think it’s a very interesting direction to explore!
Yes. A lot of these challenges are being addressed already in robotics, with basically PG methods. I believe that ultimately policy gradient + smart/adaptive curriculum/reward function can get you pretty far. That latter part is the real role of cognition in skill learning.
I'm certainly not proposing this as a "Theory of Everything", but rather as an alternative foundation to error-based models of learning. I hope it will be very feasible to extend the theory in future to account for the kinds of things you mention. And some things may even make more sense from this perspective.
As for contextual interference, savings, etc.: those are increasingly viewed as occurring at the level of retrieval and/or separation of policies across tasks, rather than at the level of low-level learning rules. In that case, they're quite compatible with the underlying policy learning rule being model-free RL.
Offline learning could be easily explained through replay - RL applications in robotics etc. do exactly this, and there's plenty of evidence something like this occurs during sleep. So it's very compatible with that.
Thanks, JJ. It doesn't directly predict those things. In our experiments, we actually haven't found there to be much forgetting of skills you learn through practice. People retain pretty much everything in our de novo task, even after a year: drive.google.com/file/d/11M0l...
Thanks, JT!
Please check out the paper for more details and hopefully an accessible intro to policy-gradient RL if you’re not familiar with it. I welcome any feedback.
If you’d like to dabble with the models, code for all simulations is available at: github.com/adrianhaith/PolicyGradientSkillLearning
/End.
I’m excited about the potential of this approach. Progress on understanding the kinds of motor learning that really matter for sports, rehab, and development has been pretty limited in recent decades. I’m hopeful that a concrete and simple computational theory can help spur progress.
And in a precision movement task, requiring precise, speeded movements through an arc-shaped channel (Shmuelof, Krakauer & Mazzoni, 2012):
In a cursor-control task with a highly non-intuitive mapping (often described as “de novo” learning, since it requires learning a brand new controller rather than adapting an existing one; Haith, Yang, Pakpoor & Kita, 2022):
Across three quite different tasks, policy-gradient RL models of learning account very well for the patterns of improvement in both the mean and the variance of people’s actions.
In Müller and Sternad’s skittles task (Sternad, Huber & Kuznetsov, 2014):
Policy-gradient RL is a simple, model-free RL method that is a pillar of impressive recent advances in robotics. Here, I show that a trial-by-trial learning rule based on policy-gradient RL accounts remarkably well for the way people improve at a skill through practice.
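For intuition, here’s a minimal toy sketch of the kind of trial-by-trial update a policy-gradient learner makes (this is illustrative only, not the paper’s actual model; the task, parameter values, and variable names are all made up): sample an action from a Gaussian policy, observe a scalar reward, and nudge the policy mean toward actions that beat a running reward baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "skill": reward is higher the closer the action is to an unknown
# target. The learner never sees the target itself, only the reward signal.
target = 0.7
def reward(a):
    return -(a - target) ** 2

mu, sigma = 0.0, 0.3    # Gaussian policy: action ~ N(mu, sigma^2)
baseline = reward(mu)   # running estimate of expected reward
alpha, beta = 0.2, 0.1  # learning rates (illustrative values)

for trial in range(500):
    a = mu + sigma * rng.standard_normal()  # explore around the current policy
    r = reward(a)
    # REINFORCE-style update: shift mu toward actions that beat the baseline
    mu += alpha * (r - baseline) * (a - mu)
    baseline += beta * (r - baseline)       # track recent average reward

print(round(mu, 2))  # mu drifts from 0 toward the rewarded region near 0.7
```

In the paper, practice shapes both the mean and the variability of actions; this sketch updates only the mean, but an analogous update on sigma would capture changes in variance.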
New Pre-Print:
www.biorxiv.org/cgi/content/...
We’re all familiar with having to practice a new skill to get better at it, but what really happens during practice? The answer, I propose, is reinforcement learning - specifically policy-gradient reinforcement learning.
Overview 🧵 below...
New pre-print!
We attempt to survey the two different universes of motor learning research: basic (meetings like NCM, MLMC) and applied (e.g. NASPSA), and consider what these fields can learn from each other and what the future might look like if they can be better integrated.
More in Eric's 🧵 👇
I agree there are other cons that have a more subtle impact and will be harder to mitigate - like failure to properly cite or attribute ideas, or potentially leading everyone up the same path. Those are the ones to be concerned about, not so much AI hallucination/confabulation.
I would say a “bad” AI user is someone who uses it without being wary of its limitations and without properly validating its output. I expect most scientists to be capable of using AI with appropriate skepticism of its output.
Bad, lazy scientists are nothing new, but they are a small minority. I'm confident most of us will be able to use it wisely to make our science better
So many pros. Most 'cons' are avoidable by basic common sense: don't just blindly assume that what it outputs is correct or true. The alarmism seems to be all about how *other* people will use it - bad, lazy scientists using it to do bad, lazy science.
Cerebellar circuit computations for predictive motor control — a Review by Katrina P. Nguyen & Abigail L. Person
www.nature.com/articles/s41...
#neuroscience #neuroskyence
Very excited to share our new paper with @adrianhaith.bsky.social, now published in @nathumbehav.nature.com.
In this article, @jetrach.bsky.social and McDougle show that motor responses can form part of structured, graph-like memory representations. @actlab.bsky.social
www.nature.com/articles/s41...
In this article, @yuedu.bsky.social and @adrianhaith.bsky.social show that behavior can become habitual in two different ways, involving response initiation and response preparation, respectively
www.nature.com/articles/s41...