
Olivier Codol

@oliviercodol.bsky.social

Neuroscience, RL for motor learning, neural control of movement, NeuroAI. Opinions stated here are my own, not those of my company.

1,543 Followers  |  253 Following  |  139 Posts  |  Joined: 28.09.2023

Latest posts by oliviercodol.bsky.social on Bluesky

Join us for Fall 2026. In our group, you can run studies from human behavior and neuroimaging, to large-scale NHP ephys, and join them up with a robust computational foundation. Bonus: you can help build the reading list.

02.12.2025 13:23 | 👍 36  🔁 29  💬 1  📌 1

Wow and in winter, which is even more beautiful!

25.11.2025 01:53 | 👍 1  🔁 0  💬 0  📌 0
Diedrichsenlab

The Sensorimotor Superlab with @gribblelab.org and @andpru.bsky.social is a unique place to work and learn. We are now accepting MSc and PhD applications for Fall 2026. Join our awesome team at Western University... For application instructions see diedrichsenlab.org and gribblelab.org/join.html!

24.11.2025 22:50 | 👍 29  🔁 24  💬 0  📌 1

Come share your passion for motor control, sensory systems, neurophysiology, neurotechnology, and more at #NCMKobe26 !!

13.11.2025 00:03 | 👍 14  🔁 3  💬 0  📌 0

As always, thank you to my kind friends and mentors along the way, who make my journey not only possible but also fun and fulfilling.

12.11.2025 17:12 | 👍 0  🔁 0  💬 0  📌 0

In my free time, I am wrapping up (a lot of) work and projects with former colleagues and friends. I will be communicating these as they come, so stay tuned!

12.11.2025 17:12 | 👍 0  🔁 0  💬 1  📌 0

While I'm sad to step away from my full-time academic work, the first few months have been fantastic: I'm enjoying doing exciting research at the scale possible in such an ambitious team and company. There's a lot to learn, and I'm grateful to my inclusive colleagues for enabling this experience.

12.11.2025 17:12 | 👍 1  🔁 0  💬 1  📌 0
The view a Research Scientist may enjoy running in Central Park

Happy to announce that as of this summer, I've joined the CTRL-Labs group at Meta Reality Labs as a Research Scientist! I've also relocated to the bustling city of New York, where I hope I can do my best work (and enjoy running in Central Park).

12.11.2025 17:12 | 👍 22  🔁 0  💬 3  📌 0

Our next paper on comparing dynamical systems (with special interest in artificial and biological neural networks) is out!! Joint work with @annhuang42.bsky.social, as well as @satpreetsingh.bsky.social, @leokoz8.bsky.social, Ila Fiete, and @kanakarajanphd.bsky.social: arxiv.org/pdf/2510.25943

10.11.2025 16:16 | 👍 67  🔁 23  💬 4  📌 3
Postdoc Position on Systems Neuroscience and Motor Control at UCLouvain, Belgium. Project title: Multi-disciplinary, multi-lab investigations of the neural bases of human sensorimotor control.

🚨🚨 We're hiring !! Looking for a postdoc? Come work in an international, collaborative, and stimulating environment on mechanisms of human upper limb motor control
👇👇👇
euraxess.ec.europa.eu/jobs/386645

10.11.2025 10:31 | 👍 6  🔁 8  💬 0  📌 0

A very nice contribution to the field, adding more evidence on how our expectations and goals shape upcoming motor commands.

Congrats to the wonderful team!

07.11.2025 13:23 | 👍 12  🔁 2  💬 1  📌 0
AI and Neuroscience | IVADO

I’m looking for interns to join our lab for a project on foundation models in neuroscience.

Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).

Interested? See the details in the comments. (1/3)

🧠🤖

07.11.2025 13:52 | 👍 43  🔁 23  💬 1  📌 0

Yes! The advantages are much clearer with respect to neural computation (memory, expressivity, and gradient propagation) than for exploration per se.

07.11.2025 04:09 | 👍 2  🔁 0  💬 0  📌 0

Learning through motor noise (exploration) is well documented in humans (lots of cool work from Shadmehr and @olveczky.bsky.social), but the scale is rather small. Here, if the dynamical regime helps exploration, I’d say it should be within these scales as well.

07.11.2025 03:27 | 👍 1  🔁 0  💬 1  📌 0

That being said, this is not how we move (execute movements), and in that sense this is a model of learning rather than control.

07.11.2025 03:17 | 👍 1  🔁 0  💬 1  📌 0

I would say yes, it’s possible. Particularly because a deviation is carried over instead of collapsing back, so the filtering that nonlinear muscle activations apply will not impact it as much as it would white noise.

07.11.2025 03:16 | 👍 2  🔁 0  💬 1  📌 0

As in, whether the edge-of-chaos regime is a consequence of RL’s need for exploration, or a cause of it?

07.11.2025 03:02 | 👍 1  🔁 0  💬 1  📌 0

If you're interested in dynamical systems analysis for neuroscience, definitely check out @oliviercodol.bsky.social 's revised version of our RL paper! Very cool results in the new Fig 6, worth it regardless of whether you saw our previous version or it's all new to you.

www.biorxiv.org/content/10.1...

06.11.2025 17:58 | 👍 37  🔁 11  💬 0  📌 0

As always, a huge thank you to my colleagues and supervisors @glajoie.bsky.social @mattperich.bsky.social and @nandahkrishna.bsky.social for helping make this work what it is, and for making the journey so fun and interesting.

06.11.2025 02:13 | 👍 6  🔁 0  💬 0  📌 0

We’re pleased to see RL's role in neural plasticity coming increasingly into focus in the motor control community (check out @adrianhaith.bsky.social's latest piece!)
I strongly believe motor learning sits at the interface of many plasticity mechanisms, and RL is an important piece of this puzzle.

06.11.2025 02:09 | 👍 6  🔁 0  💬 1  📌 0

Alongside the above, we add discussion points that I hope will clarify some of our stance on the topic of RL in neuroscience, and acknowledge some important past work that we believe our study complements. We also add several important controls (particularly Figs. S8, S14). Feel free to check it all out!

06.11.2025 02:09 | 👍 3  🔁 0  💬 1  📌 0

“Edge of chaos” dynamics have long been recognized as a computationally potent regime that avoids vanishing gradients during learning and allows a system greater memory and expressivity. This stark difference surprised us, and we think it can help explain our results on neural adaptation.

06.11.2025 02:09 | 👍 4  🔁 1  💬 2  📌 0
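The link between contractive dynamics and vanishing gradients can be illustrated with a toy linear RNN. This is a hypothetical sketch, not the paper's trained models: random recurrent weights rescaled to a chosen spectral radius, with a gradient vector propagated backward through time.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 64, 50

def recurrent_matrix(spectral_radius):
    """Random Gaussian recurrent weights rescaled to a target spectral radius."""
    W = rng.normal(size=(n, n)) / np.sqrt(n)
    return W * spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

def backprop_norm(W, T):
    """Norm of a gradient vector propagated back through T steps of the
    linear RNN x_{t+1} = W x_t (the backward Jacobian is W.T at each step)."""
    g = rng.normal(size=n)
    g /= np.linalg.norm(g)
    for _ in range(T):
        g = W.T @ g
    return np.linalg.norm(g)

contractive = backprop_norm(recurrent_matrix(0.5), T)  # orderly, contractive regime
critical = backprop_norm(recurrent_matrix(1.0), T)     # edge of chaos

print(contractive, critical)
```

In the contractive network the gradient norm shrinks roughly geometrically with the horizon, while the near-critical network (spectral radius ≈ 1) largely preserves it, which is the usual intuition for why edge-of-chaos dynamics help learning over long horizons.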

Indeed, Lyapunov exponents show that RL models’ fixed points largely stay near 0, meaning these networks’ dynamics lie at the edge of chaos. SL models’ dynamics, in contrast, are contractive and orderly, keeping very little information in memory for long and having stereotyped expressivity.

06.11.2025 02:09 | 👍 5  🔁 1  💬 1  📌 0

Does this mean SL models are very orderly, while RL models lie at the interface between order and chaos? To formally confirm, we looked at Lyapunov exponents, which tell us how fast nearby states diverge. Unlike Jacobians, this tells us about long-horizon dynamics, not just local ones.

06.11.2025 02:09 | 👍 2  🔁 0  💬 1  📌 0
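For intuition, the largest Lyapunov exponent can be estimated with the classic two-trajectory method: perturb a state, evolve both copies, and renormalize the gap at every step. A minimal sketch on a toy tanh RNN with hypothetical gains (not the study's trained models):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

def step(x, W):
    """One step of the toy RNN dynamics."""
    return np.tanh(W @ x)

def largest_lyapunov(W, T=2000, eps=1e-8):
    """Estimate the largest Lyapunov exponent by tracking how fast two
    nearby trajectories diverge, renormalizing the gap at every step."""
    x = 0.1 * rng.normal(size=n)
    d = rng.normal(size=n)
    d *= eps / np.linalg.norm(d)
    log_growth = 0.0
    for _ in range(T):
        x_next = step(x, W)
        gap = step(x + d, W) - x_next
        norm = np.linalg.norm(gap)
        log_growth += np.log(norm / eps)
        d = gap * (eps / norm)  # renormalize so the gap stays infinitesimal
        x = x_next
    return log_growth / T

W = rng.normal(size=(n, n)) / np.sqrt(n)
lyap_contractive = largest_lyapunov(0.5 * W)  # low gain: orderly
lyap_chaotic = largest_lyapunov(3.0 * W)      # high gain: chaotic
print(lyap_contractive, lyap_chaotic)
```

A clearly negative exponent means nearby states collapse together (orderly, contractive dynamics); a positive one means they diverge (chaos); values hovering near zero are the edge-of-chaos signature described above.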

We looked at local dynamics around fixed points over time. This showed that SL models’ fixed points are indeed very stable, with nearly all modes of their eigenspectrum <1. RL models showed many more self-sustaining modes ≈1, again demonstrating isometric dynamics.

06.11.2025 02:09 | 👍 3  🔁 0  💬 1  📌 0
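The eigenspectrum analysis can be sketched on a toy tanh RNN where the origin is a fixed point; the recurrent gain stands in for the contractive vs. near-critical regimes (an illustrative assumption, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x_fp = np.zeros(n)  # for x_{t+1} = tanh(W x_t), the origin is a fixed point

def jacobian_at(x, W):
    """Jacobian of the tanh RNN update at state x: diag(1 - tanh(Wx)^2) @ W."""
    return np.diag(1.0 - np.tanh(W @ x) ** 2) @ W

W = rng.normal(size=(n, n)) / np.sqrt(n)

# Eigenvalue magnitudes of the local linearization in each regime.
eig_contractive = np.abs(np.linalg.eigvals(jacobian_at(x_fp, 0.5 * W)))
eig_critical = np.abs(np.linalg.eigvals(jacobian_at(x_fp, 1.0 * W)))

print(eig_contractive.max(), eig_critical.max())
```

In the contractive case every mode has magnitude well below 1, so perturbations decay; in the near-critical case several modes sit close to magnitude 1 and can self-sustain, which is the signature described in the post.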

A dynamical system could recover perfectly against a state perturbation, or it could expand following that perturbation. It turns out supervised learning (SL) models do the former, while reinforcement learning (RL) models do something in-between; they act as isometric systems.

06.11.2025 02:09 | 👍 3  🔁 0  💬 1  📌 0

But a biological brain receives an ever-changing stream of inputs, rarely ever reducing to steady-state inputs. Our models reflect that, and their inputs are time-varying.

So we took a slightly different approach, and asked how fixed points evolved over time and over perturbed neural states.

06.11.2025 02:09 | 👍 3  🔁 0  💬 1  📌 0

Usually, one determines where neural activity naturally settles under a steady-state input regime to find “fixed-point” neural states. Local dynamics around these points provide valuable information about how neural networks process information: that is, what they compute, and how.

06.11.2025 02:09 | 👍 2  🔁 0  💬 1  📌 0
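As a minimal illustration of that procedure (toy network and input, not the study's models): freeze the input, run the dynamics until activity settles, and the resulting state is a fixed point whose local Jacobian can then be examined.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
W = 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)  # weak (contractive) recurrence
u = 0.5 * rng.normal(size=n)                    # steady-state input, held constant

def F(x):
    """One step of the RNN dynamics under the frozen input u."""
    return np.tanh(W @ x + u)

# Let activity settle under the constant input by iterating the dynamics.
x = 0.1 * rng.normal(size=n)
for _ in range(500):
    x = F(x)

residual = np.linalg.norm(F(x) - x)              # ~0 at a fixed point
J = np.diag(1.0 - np.tanh(W @ x + u) ** 2) @ W   # local Jacobian at the fixed point
print(residual, np.abs(np.linalg.eigvals(J)).max())
```

The eigenvalues of J then describe the local dynamics around the settled state, exactly the quantity the eigenspectrum and Lyapunov analyses in this thread build on.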

But alignment metrics can overlook the question of what gives rise to the differences they capture. We approached this using a now-established framework in systems neuroscience: dynamical systems theory.

06.11.2025 02:09 | 👍 2  🔁 0  💬 1  📌 0

This similarity to NHP neural recordings held for geometric similarity metrics (CCA), but also for dynamical similarity. Importantly, it was only evident when our models were trained to control biomechanically realistic effectors.

06.11.2025 02:09 | 👍 4  🔁 0  💬 1  📌 0
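Geometric similarity via CCA can be sketched in a few lines of numpy. Here the two "recordings" are synthetic stand-ins driven by shared latent signals, purely to illustrate the metric:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n1, n2 = 500, 30, 20

# Hypothetical stand-ins for two recordings (e.g., model units and neural
# units) driven by partially shared latent signals plus private noise.
Z = rng.normal(size=(T, 5))                                # shared latents
X = Z @ rng.normal(size=(5, n1)) + 0.1 * rng.normal(size=(T, n1))
Y = Z @ rng.normal(size=(5, n2)) + 0.1 * rng.normal(size=(T, n2))

def cca_correlations(X, Y):
    """Canonical correlations between two (time x units) activity matrices,
    from the singular values of Qx.T @ Qy after centering and QR."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

rho = cca_correlations(X, Y)
print(rho[:6])
```

The top canonical correlations come out near 1 for the dimensions carrying the shared latents and drop off for the noise-only dimensions; averaging the top correlations is one common scalar similarity score.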
