
Matthijs Pals

@matthijspals.bsky.social

Using deep learning to study neural dynamics @mackelab.bsky.social

931 Followers  |  617 Following  |  19 Posts  |  Joined: 14.11.2024

Posts by Matthijs Pals (@matthijspals.bsky.social)


Postdoc position in Paris: come help develop a new generation of human brain-computer interfaces ⚡🧠💻

Interested? Contact me if you have experience with machine learning (e.g. simulation-based inference, RL, generative/diffusion models) or dynamical systems.

See below for more details, and please repost 🙏

27.01.2026 22:12 — 👍 75    🔁 56    💬 3    📌 5
Diagram of a recurrent neural network: input goes into the network, output is compared to a target to produce an error, and dotted feedback arrows show updates to neural activity and to synaptic weights.


1/7 How should feedback signals influence a network during learning? Should they first adjust synaptic weights, which then indirectly change neural activity (as in backpropagation)? Or should they first adjust neural activity to guide synaptic updates (e.g., target propagation)? openreview.net/forum?id=xVI...

08.01.2026 22:10 — 👍 40    🔁 5    💬 1    📌 0
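The two update orders can be made concrete in a toy example. The sketch below is my own illustration (not the paper's code, and all names are mine): a single linear layer y = W x trained on a squared error, updated once "weights first" (backprop-style) and once "activity first" (target-prop-style). For one linear layer the two coincide, which is exactly why the interesting differences only appear in deep or recurrent networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single linear layer y = W x with squared-error loss.
W = rng.normal(size=(2, 3))
x = rng.normal(size=3)
x /= np.linalg.norm(x)                  # unit input, so the steps below are exact
target = np.array([1.0, -1.0])
lr = 0.1

y = W @ x
err_before = np.linalg.norm(y - target)

# "Weights first" (backprop-style): the error gradient updates W directly;
# the change in activity then follows from the new weights.
W_bp = W - lr * np.outer(y - target, x)
err_bp = np.linalg.norm(W_bp @ x - target)

# "Activity first" (target-prop-style): first nudge the activity toward a
# local target y_hat, then change W minimally so that W x reaches y_hat.
y_hat = y - lr * (y - target)           # activity target
W_tp = W + np.outer(y_hat - y, x)       # minimal-norm weight change (||x|| = 1)
err_tp = np.linalg.norm(W_tp @ x - target)

# Both orders shrink the error; for this single linear layer they even
# produce the identical weight update.
print(err_before, err_bp, err_tp)
```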

Thanks for the insightful response! I see how multiple populations overcome the limit of shared gain - yet in data there can be large overlaps in units tracking different variables simultaneously. And yes, maybe one shouldn't think of separate tasks (e.g., two rings), but rather one task (one torus)

20.12.2025 13:36 — 👍 1    🔁 0    💬 0    📌 0

In some way this could be seen as doing multiple tasks at the same time (e.g., 3 ring attractors for storing three angular variables). Do you have any idea or speculation on how to extend your framework to this setting? Thanks! 2/2

19.12.2025 15:23 — 👍 1    🔁 0    💬 1    📌 0

Hi, great work and a nicely written paper! It seems that here at most one task is active at a given time. It has been shown that macaques can memorise multiple stimuli at the same time, in (not perfectly) orthogonal subspaces, using overlapping populations of units pubmed.ncbi.nlm.nih.gov/39178858/ 1/2

19.12.2025 15:22 — 👍 3    🔁 0    💬 1    📌 0
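The "(not perfectly) orthogonal subspaces" in that comment can be quantified with principal angles. Below is a hedged sketch (my own illustration, not from either paper): the cosines of the principal angles between two subspaces are the singular values of Q1ᵀQ2, where Q1, Q2 are orthonormal bases; 1 means a shared direction, 0 means orthogonality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Principal-angle cosines between the column spaces of A and B.
def subspace_cosines(A, B):
    Q1, _ = np.linalg.qr(A)      # orthonormal basis of each subspace
    Q2, _ = np.linalg.qr(B)
    return np.linalg.svd(Q1.T @ Q2, compute_uv=False)

n_neurons = 100
A = rng.normal(size=(n_neurons, 3))   # two random 3-D "memory" subspaces
B = rng.normal(size=(n_neurons, 3))   # in a 100-D population space
cos = subspace_cosines(A, B)

# Random subspaces in high dimensions are nearly, but not perfectly,
# orthogonal: all cosines come out small yet nonzero.
print(cos)
```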

Our paper on data-constrained RNNs that generalize to optogenetic perturbations is now citable on eLife:
doi.org/10.7554/eLif...

18.12.2025 23:07 — 👍 42    🔁 18    💬 1    📌 2

Finally got the job ad: looking for 2 PhD students to start in spring next year:

www.gao-unit.com/join-us/

If comp neuro, ML, or AI4Neuro is your thing, or you just nerd out over brain recordings, apply!

I'm at NeurIPS. DM me here or on the conference app, or email me if you want to meet 🖐️🌮

03.12.2025 09:36 — 👍 81    🔁 51    💬 1    📌 5
Jobs - mackelab The MackeLab is a research group at the Excellence Cluster Machine Learning at Tübingen University!

We are looking for a Research Engineer (E13 TV-L) to work at the intersection of #ML and #compneuro! 🤖🧠

Help us build large-scale bio-inspired neural networks, write high-quality research code, and contribute to open-source tools like jaxley, sbi, and flyvis 🪰.

More info: www.mackelab.org/jobs/

28.11.2025 13:54 — 👍 13    🔁 4    💬 0    📌 2

MackeLab has grown! 🎉 A warm welcome to 5(!) brilliant and fun new PhD students and research scientists who joined our lab in the past year. We can't wait to do great science together, and we're already having a good time! 🤖🧠 Meet them in the thread 👇 1/7

28.11.2025 10:26 — 👍 19    🔁 4    💬 1    📌 1
Jaxley: differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics - Nature Methods Jaxley is a versatile platform for biophysical modeling in neuroscience. It allows efficiently simulating large-scale biophysical models on CPUs, GPUs and TPUs. Model parameters can be optimized with ...

I am super happy to share that our project on training biophysical models with Jaxley is now published in Nature Methods: www.nature.com/articles/s41...

13.11.2025 12:38 — 👍 78    🔁 16    💬 1    📌 3

Our work on training biophysical models with Jaxley is now out in @natmethods.nature.com. Led by @deismic.bsky.social, with @philipp.hertie.ai, @ppjgoncalves.bsky.social & @jakhmack.bsky.social et al.

Paper: www.nature.com/articles/s41...

13.11.2025 13:08 — 👍 48    🔁 25    💬 1    📌 4

Really cool work! 🔥

02.10.2025 13:04 — 👍 2    🔁 0    💬 1    📌 0

The Macke lab is well-represented at the @bernsteinneuro.bsky.social conference in Frankfurt this year! We have lots of exciting new work to present with 7 posters (details 👇) 1/9

30.09.2025 14:06 — 👍 30    🔁 9    💬 1    📌 0
ALT: a man wearing a white shirt and tie smiles in front of a window

I've been waiting some years to make this joke and now it's real:

I conned somebody into giving me a faculty job!

I'm starting as a W1 Tenure-Track Professor at Goethe University Frankfurt in a week (lol), in the Faculty of CS and Math

and I'm recruiting PhD students 🤗

23.09.2025 12:58 — 👍 188    🔁 31    💬 30    📌 3

Our #AI #DynamicalSystems #FoundationModel DynaMix was accepted to #NeurIPS2025 with outstanding reviews (6555): the first model that can *zero-shot*, without any fine-tuning, forecast the *long-term statistics* of a time series given a context. Test it on #HuggingFace:
huggingface.co/spaces/Durst...

21.09.2025 09:40 — 👍 12    🔁 4    💬 1    📌 1

From hackathon to release: sbi v0.25 is here! 🎉

What happens when dozens of SBI researchers and practitioners collaborate for a week? New inference methods, new documentation, lots of new embedding networks, a bridge to pyro and a bridge between flow matching and score-based methods 🤯

1/7 🧵

09.09.2025 15:00 — 👍 29    🔁 16    💬 1    📌 1
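For readers new to the term: simulation-based inference does Bayesian inference when you can only *simulate* from a model, not evaluate its likelihood. The bare-bones version of the idea is rejection ABC, sketched below on a toy Gaussian simulator. This is a didactic stand-in of my own, not the sbi package's neural methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: Gaussian with unknown mean theta and unit noise.
# We pretend the likelihood is unavailable and only sample from it.
def simulator(theta):
    return theta + rng.normal(size=theta.shape)

theta = rng.uniform(-5.0, 5.0, size=200_000)   # draws from a flat prior
x = simulator(theta)                           # one simulation per draw
x_o = 1.5                                      # the "observed" data point

# Rejection ABC: keep only the parameters whose simulation landed near
# the observation; the survivors approximate posterior samples.
posterior_samples = theta[np.abs(x - x_o) < 0.1]
print(posterior_samples.mean(), posterior_samples.size)
```

Neural SBI methods replace this wasteful accept/reject step with a network trained on (theta, x) pairs, which is what makes amortized inference over many observations possible.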

Got provisional approval for 2 major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.

13.07.2025 06:23 — 👍 34    🔁 8    💬 0    📌 0
Vacancies at the RUG

Jelmer Borst and I are looking for a PhD candidate to build an EEG-based model of human working memory! This is a really cool project that I've wanted to kick off for a while, and I can't wait to see it happen. Please share and I'm happy to answer any Qs about the project!
www.rug.nl/about-ug/wor...

03.07.2025 13:29 — 👍 15    🔁 21    💬 1    📌 1
Preview
Null and Noteworthy: Neurons tracking sequences don't fire in order Instead, neurons encode the position of sequential items in working memory based on when they fire during ongoing brain wave oscillations—a finding that challenges a long-standing theory.

The neurons that encode sequential information into working memory do not fire in that same order during recall, a finding that is at odds with a long-standing theory. Read more in this monthโ€™s Null and Noteworthy.

By @ldattaro.bsky.social

#neuroskyence

www.thetransmitter.org/null-and-not...

30.06.2025 16:08 — 👍 42    🔁 19    💬 1    📌 0
Preview
Abstract rule learning promotes cognitive flexibility in complex environments across species Nature Communications - Whether neurocomputational mechanisms that speed up human learning in changing environments also exist in other species remains unclear. Here, the authors show that both...

How do animals learn new rules? By systematically testing different behavioral strategies, guided by selective attention to rule-relevant cues: rdcu.be/etlRV
Akin to in-context learning in AI, strategy selection depends on the animals' "training set" (prior experience), with similar representations in rats & humans.

26.06.2025 15:30 — 👍 8    🔁 2    💬 1    📌 0

Out today in @nature.com: we show that individual neurons have diverse tuning to a decision variable computed by the entire population, revealing a unifying geometric principle for the encoding of sensory and dynamic cognitive variables.
www.nature.com/articles/s41...

25.06.2025 22:38 — 👍 206    🔁 52    💬 5    📌 4
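The geometric picture in that announcement can be illustrated in a few lines. The sketch below is my reading, not the paper's code: a single decision variable d(t) shared by the whole population, with each neuron loading onto it through its own weight. Individual neurons then look diversely tuned, yet one population-level direction recovers the common variable.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_time = 50, 200
d = np.cumsum(rng.normal(size=n_time))     # drifting 1-D decision variable
w = rng.normal(size=n_neurons)             # per-neuron loading: diverse tuning
rates = np.outer(w, d) + 0.1 * rng.normal(size=(n_neurons, n_time))

# Project population activity back onto the loading direction: despite the
# heterogeneous single-neuron responses, this recovers d almost perfectly.
d_hat = w @ rates / (w @ w)
corr = np.corrcoef(d, d_hat)[0, 1]
print(round(corr, 3))
```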

Our new preprint 👀

09.06.2025 19:32 — 👍 31    🔁 6    💬 0    📌 0
Memory by a thousand rules: Automated discovery of functional multi-type plasticity rules reveals variety & degeneracy at the heart of learning Synaptic plasticity is the basis of learning and memory, but the link between synaptic changes and neural function remains elusive. Here, we used automated search algorithms to obtain thousands of str...

We just pushed "Memory by a 1000 rules" onto bioRxiv, where we use clever #ML to find #plasticity quadruplets (EE, EI, IE, II) that learn basic stability in spiking nets. Why is it cool? We find 1000s!! of solutions, and they don't just stabilise. They #memorise! www.biorxiv.org/content/10.1...

02.06.2025 18:50 — 👍 135    🔁 47    💬 4    📌 5
A wide shot of approximately 30 individuals standing in a line, posing for a group photograph outdoors. The background shows a clear blue sky, trees, and a distant cityscape or hills.


Great news! Our March SBI hackathon in Tübingen was a huge success, with 40+ participants (30 onsite!). Expect significant updates soon: awesome new features & revamped documentation you'll love! Huge thanks to our amazing SBI community! Release details coming soon. 🥁🎉

12.05.2025 14:29 — 👍 26    🔁 7    💬 0    📌 1

Please RT 🙏

Reach out if you want to help understand cognition by modelling, analyzing, and/or collecting large-scale intracortical data from 👩🐒🐁

We're a friendly, diverse group (n > 25) with this terrace 😎 in the center of Paris! See 👇 for more info about the lab

We have funding to support your application!

10.05.2025 14:23 — 👍 39    🔁 21    💬 1    📌 0

🎓 Hiring now! 🧠 Join us at the exciting intersection of ML and Neuroscience! #AI4science
We're looking for PhDs, Postdocs, and Scientific Programmers who want to use deep learning to build, optimize, and study mechanistic models of neural computations. Full details: www.mackelab.org/jobs/ 1/5

30.04.2025 13:43 — 👍 23    🔁 12    💬 1    📌 0

Re-posting is appreciated: we have a fully funded PhD position in the CMC lab @cmc-lab.bsky.social (at @tudresden_de). You can use forms.gle/qiAv5NZ871kv... to send your application and find more information. The deadline is April 30. Find out more about the CMC lab at cmclab.org, and email me if you have questions.

20.02.2025 14:50 — 👍 77    🔁 89    💬 3    📌 8
Compositional simulation-based inference for time series Amortized simulation-based inference (SBI) methods train neural networks on simulated data to perform Bayesian inference. While this strategy avoids the need for tractable likelihoods, it often requir...

Excited to present our work on compositional SBI for time series at #ICLR2025 tomorrow!

If you're interested in simulation-based inference for time series, come chat with Manuel Gloeckler or Shoji Toyota

at Poster #420, Saturday 10:00โ€“12:00 in Hall 3.

📰: arxiv.org/abs/2411.02728

25.04.2025 08:53 — 👍 25    🔁 4    💬 2    📌 1
ICLR 2025 (Oral): Comparing noisy neural population dynamics using optimal transport distances

Excited to announce that our paper "Comparing noisy neural population dynamics using optimal transport distances" has been selected for an oral presentation at #ICLR2025 (top 1.8% of papers). Check the thread for paper details (0/n).

Presentation info: iclr.cc/virtual/2025....

22.04.2025 18:06 — 👍 23    🔁 7    💬 1    📌 0
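For intuition about the distances in that paper's title: the simplest optimal-transport distance between noisy states is the closed-form 2-Wasserstein distance between two Gaussians. The sketch below shows only this static special case (my own illustration, not the paper's dynamics-aware metric), using a numpy-only PSD matrix square root.

```python
import numpy as np

# Symmetric PSD matrix square root via eigendecomposition.
def psd_sqrt(C):
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

# Closed-form 2-Wasserstein distance between N(m1, C1) and N(m2, C2):
# W2^2 = ||m1 - m2||^2 + Tr(C1 + C2 - 2 (C2^0.5 C1 C2^0.5)^0.5)
def w2_gaussian(m1, C1, m2, C2):
    S2 = psd_sqrt(C2)
    cross = psd_sqrt(S2 @ C1 @ S2)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

m1, C1 = np.zeros(2), np.eye(2)
m2, C2 = np.array([3.0, 0.0]), np.eye(2)

# With identical covariances the distance reduces to the mean separation.
print(w2_gaussian(m1, C1, m2, C2))
```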

Happening tomorrow morning :).

28.03.2025 20:30 — 👍 8    🔁 0    💬 0    📌 0