Adrian Haith

@adrianhaith.bsky.social

Motor Control and Motor Learning

524 Followers 534 Following 26 Posts Joined Sep 2024
1 month ago

While preparing your #NCMKobe26 abstract, read through the highlights from #NCMPan25! The meeting highlight article is now available for review.

Thank you to some of the scholarship winners from 2025 for putting the article together.

journals.physiology....

3 months ago

Faculty at Predominantly Undergraduate Institutions:
Your research matters and we want to see you at #NCMKobe26!

NCM offers a fellowship to support PUI faculty presenting at the meeting.

Up to $1,500 in travel support.

More info: ncm-society.org/dive...

4 months ago

Come share your passion about motor control, sensory systems, neurophysiology, neurotechnology, and more at #NCMKobe26 !!

4 months ago

Deadline to submit for oral presentations at #NCMKobe26 is in just under a month!

We especially welcome Team/Panel proposals: a 2-hour session with 4 talks plus discussion. Historically, the acceptance rate is higher for panels than for individual talks.

Poster submission closes in February.

4 months ago

Lastly, I should add -- I'm not the first to propose policy gradient RL for modeling human motor learning. Check out @nidhise.bsky.social 's excellent paper arguing for policy-gradient RL in locomotor learning:

4 months ago
Policy-Gradient Reinforcement Learning as a General Theory of Practice-Based Motor Skill Learning Mastering any new skill requires extensive practice, but the computational principles underlying this learning are not clearly understood. Existing theories of motor learning can explain short-term ad...

Our 2024 paper showed that policy gradient RL (with performance-based memory updates) predicts long-horizon motor learning. Now, @adrianhaith.bsky.social shows that policy-gradient RL also explains learning in other shorter horizon tasks. Exciting!

www.biorxiv.org/content/10.1...

4 months ago

I suspect we stand to learn more from the roboticists than the other way around. Progress there is accelerating rapidly! But maybe there will be insights that can go in the other direction too. I think it’s a very interesting direction to explore!

4 months ago

Yes. A lot of these challenges are being addressed already in robotics, with basically PG methods. I believe that ultimately policy gradient + smart/adaptive curriculum/reward function can get you pretty far. That latter part is the real role of cognition in skill learning.

4 months ago

I'm certainly not proposing this as a "Theory of Everything", but rather as an alternative foundation to error-based models of learning. I hope it will prove feasible to extend the theory in future to account for the kinds of things you mention. And some things may even make more sense from this perspective.

4 months ago

As for contextual interference, savings, etc.: those phenomena are increasingly viewed as occurring at the level of retrieval and/or separation of policies across tasks, rather than at the level of low-level learning rules. In that case, they're quite compatible with the underlying policy learning rule being model-free RL.
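A toy sketch of this retrieval/separation idea (my own illustration, not anything from the paper: task names are made up and a single scalar stands in for real policy parameters). Each context retrieves its own policy, which can then be updated by any model-free rule; savings falls out of retrieval, since revisiting a task restores its previously learned parameters:

```python
# Toy illustration: per-context policy storage. Each context (task)
# retrieves its own parameters; the update applied to them can be any
# model-free rule (e.g. a policy-gradient step).
class PolicyStore:
    def __init__(self):
        self.policies = {}  # context -> (placeholder scalar) policy parameters

    def retrieve(self, context, init=0.0):
        # First encounter with a context creates a fresh policy.
        return self.policies.setdefault(context, init)

    def update(self, context, delta):
        # Retrieve the context's policy, then apply a learning update to it.
        self.policies[context] = self.retrieve(context) + delta

store = PolicyStore()
store.update("task_A", +0.5)   # practice task A
store.update("task_B", -0.2)   # switch to task B: separate parameters
store.update("task_A", +0.5)   # return to A: retrieval yields savings
```

Interference and savings then live in how contexts are distinguished and retrieved, while the per-policy learning rule stays the same.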

4 months ago

Offline learning could easily be explained through replay: RL applications in robotics etc. do exactly this, and there's plenty of evidence that something like this occurs during sleep. So it's very compatible with that.
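A minimal sketch of the replay idea (illustrative only; the buffer capacity and sampling scheme are arbitrary choices of mine, not from the thread or paper):

```python
import random

# Toy replay buffer: offline "practice" re-uses stored trials for extra
# learning updates, analogous to replay during sleep.
class ReplayBuffer:
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.trials = []            # list of (action, reward) tuples

    def add(self, action, reward):
        self.trials.append((action, reward))
        if len(self.trials) > self.capacity:
            self.trials.pop(0)      # drop the oldest trial

    def sample(self, k=32):
        # Draw a random batch of remembered trials for offline updates.
        return random.sample(self.trials, min(k, len(self.trials)))
```

Offline learning then amounts to drawing batches from the buffer and applying the same trial-by-trial update rule used online.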

4 months ago

Thanks, JJ. It doesn't directly predict those things. In our experiments, we actually haven't found there to be much forgetting of skills you learn through practice. People retain pretty much everything in our de novo task, even after a year: drive.google.com/file/d/11M0l...

4 months ago

Thanks, JT!

4 months ago

Please check out the paper for more details and hopefully an accessible intro to policy-gradient RL if you’re not familiar with it. I welcome any feedback.

If you’d like to dabble with the models, code for all simulations is available at: github.com/adrianhaith/PolicyGradientSkillLearning

/End.

4 months ago

I’m excited about the potential of this approach. Progress on understanding the kinds of motor learning that really matter for sports, rehab, and development has been pretty limited in recent decades. I’m hopeful that a concrete and simple computational theory can help spur progress.

4 months ago

And in a precision task requiring accurate, speeded movements through an arc-shaped channel (Shmuelof, Krakauer & Mazzoni, 2012):

4 months ago

In a cursor-control task with a highly non-intuitive mapping (often described as “de novo” learning, since it requires learning a brand-new controller rather than adapting an existing one; Haith, Yang, Pakpoor & Kita, 2022):

4 months ago

Across three quite different tasks, policy-gradient RL models of learning account very well for the patterns of improvement in the mean and variance of people’s actions.

In Müller and Sternad’s skittles task (Sternad, Huber & Kuznetsov, 2014):

4 months ago

Policy-gradient RL is a simple, model-free RL method that is a pillar of impressive recent advances in robotics. Here, I show that a trial-by-trial learning rule based on policy-gradient RL accounts remarkably well for the way people improve at a skill through practice.
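As a minimal sketch of what such a trial-by-trial policy-gradient rule looks like: the REINFORCE update for a Gaussian policy over a single action dimension. This is an illustrative toy of my own, with a made-up reward function and learning rates, not the paper's actual model (the paper's simulation code is in the author's repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian policy over a 1-D action: learnable mean and log-std.
mu, log_sigma = 0.0, 0.0
alpha_mu, alpha_sigma = 0.05, 0.01   # hypothetical learning rates
target = 2.0                         # toy task: reward peaks at the target
baseline = 0.0                       # running reward baseline (variance reduction)

for trial in range(2000):
    sigma = np.exp(log_sigma)
    a = rng.normal(mu, sigma)        # sample and "execute" an action
    r = -(a - target) ** 2           # scalar reward for this trial
    adv = r - baseline               # reward relative to recent experience
    # REINFORCE: gradients of log N(a; mu, sigma) w.r.t. mu and log-sigma
    g_mu = (a - mu) / sigma**2
    g_ls = (a - mu) ** 2 / sigma**2 - 1.0
    mu += alpha_mu * adv * g_mu
    log_sigma += alpha_sigma * adv * g_ls
    baseline += 0.1 * (r - baseline)

# Over practice, the policy mean drifts toward the rewarded action and the
# action variability shrinks.
```

The learner never models the task; it only nudges its action distribution toward whatever happened to pay off, which is what makes this a model-free account of practice.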

4 months ago
Policy-Gradient Reinforcement Learning as a General Theory of Practice-Based Motor Skill Learning Mastering any new skill requires extensive practice, but the computational principles underlying this learning are not clearly understood. Existing theories of motor learning can explain short-term ad...

New Pre-Print:
www.biorxiv.org/cgi/content/...

We’re all familiar with having to practice a new skill to get better at it, but what really happens during practice? The answer, I propose, is reinforcement learning - specifically policy-gradient reinforcement learning.

Overview 🧵 below...

6 months ago

New pre-print!

We attempt to survey the two different universes of motor learning research: basic (meetings like NCM, MLMC) and applied (e.g. NASPSA), and consider what these fields can learn from each other and what the future might look like if they can be better integrated.

More in Eric's 🧵 👇

6 months ago

I agree there are other cons that have a more subtle impact and will be harder to mitigate - like failure to properly cite or attribute ideas, or potentially leading everyone up the same path. Those are the ones to be concerned about, not so much AI hallucination/confabulation.

6 months ago

I would say a “bad” AI user is someone who uses it without being wary of its limitations and without properly validating its output. I expect most scientists to be capable of using AI with appropriate skepticism of its output.

6 months ago

Bad, lazy scientists are nothing new, but they are a small minority. I'm confident most of us will be able to use it wisely to make our science better.

6 months ago

So many pros. Most 'cons' are avoidable by basic common sense: don't just blindly assume that what it outputs is correct or true. The alarmism seems to be all about how *other* people will use it - bad, lazy scientists using it to do bad, lazy science.

8 months ago
Cerebellar circuit computations for predictive motor control - Nature Reviews Neuroscience The cerebellum helps ensure the speed and accuracy of movements, but its precise contributions to movement control are unclear. Nguyen and Person here evaluate evidence for and against feedforward mot...

Cerebellar circuit computations for predictive motor control — a Review by Katrina P. Nguyen & Abigail L. Person

www.nature.com/articles/s41...

#neuroscience #neuroskyence

9 months ago

Very excited to share our new paper with @adrianhaith.bsky.social, now published in @nathumbehav.nature.com.

9 months ago
Mental graphs structure the storage and retrieval of visuomotor associations - Nature Human Behaviour Trach and McDougle show that motor responses can form part of structured, graph-like memory representations.

In this article, @jetrach.bsky.social and McDougle show that motor responses can form part of structured, graph-like memory representations. @actlab.bsky.social
www.nature.com/articles/s41...

9 months ago
Dissociable habits of response preparation versus response initiation - Nature Human Behaviour Du and Haith show that behaviour can become habitual in two different ways, involving response initiation and response preparation.

In this article, @yuedu.bsky.social and @adrianhaith.bsky.social show that behavior can become habitual in two different ways, involving response initiation and response preparation, respectively
www.nature.com/articles/s41...

10 months ago
Frontiers | Are muscle synergies useful for neural control? The observation that the activity of multiple muscles can be well approximated by a few linear synergies is viewed by some as a sign that such low-dimensiona...

I recommend this paper: www.frontiersin.org/journals/com...
