Tomorrow Christoph will present DynaMix, the first foundation model for dynamical systems reconstruction, at #NeurIPS2025 Exhibit Hall C,D,E #2303
05.12.2025 13:28
@durstewitzlab.bsky.social
Scientific AI/ machine learning, dynamical systems (reconstruction), generative surrogate models of brains & behavior, applications in neuroscience & mental health
Thanks for sharing! Missed it, but just downloaded it, looking forward to getting into it ...
01.12.2025 14:17
Unlike current AI systems, animals can quickly and flexibly adapt to changing environments.
This is the topic of our new perspective in Nature MI (rdcu.be/eSeif), where we relate dynamical and plasticity mechanisms in the brain to in-context and continual learning in AI. #NeuroAI
Revised version of our #NeurIPS2025 paper with full code base in Julia & Python now online, see arxiv.org/abs/2505.13192
28.10.2025 18:27
Despite being extremely lightweight (only 0.1% of the parameters and 0.6% of the training-corpus size of its closest competitor), it also outperforms major TS foundation models like Chronos variants on real-world TS forecasting, with minimal inference times (0.2%) ...
21.09.2025 09:40
Our #AI #DynamicalSystems #FoundationModel DynaMix was accepted to #NeurIPS2025 with outstanding reviews (6555) – the first model which can *zero-shot*, w/o any fine-tuning, forecast the *long-term statistics* of time series provided a context. Test it on #HuggingFace:
huggingface.co/spaces/Durst...
Relevant publications:
www.nature.com/articles/s41...
openreview.net/pdf?id=Vp2OA...
proceedings.mlr.press/v235/brenner...
www.nature.com/articles/s41...
We have openings for several fully-funded positions (PhD & PostDoc) at the intersection of AI/ML, dynamical systems, and neuroscience within a BMFTR-funded Neuro-AI consortium, at Heidelberg University & Central Institute of Mental Health:
www.einzigartigwir.de/en/job-offer...
More info below ...
Is it possible to go from spikes to rates without averaging?
We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!
Presented at Gatsby Neural Dynamics Workshop, London.
Today I joined >1900 members of US National Academies of Science, Engineering & Medicine signing this open letter (views our own).
Leadership of science by US has been paramount for >70yrs & Admin is now acting to throw it all away!
docs.google.com/document/d/1...
www.nytimes.com/2025/03/31/s...
What a fantastic accomplishment -- and what a fantastic story! www.quantamagazine.org/at-17-hannah...
03.08.2025 12:13
Got provisional approval for two major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: Expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.
13.07.2025 06:23
We wrote a little #NeuroAI piece about in-context learning & neural dynamics vs. continual learning & plasticity, both mechanisms to flexibly adapt to changing environments:
arxiv.org/abs/2507.02103
We relate this to non-stationary rule learning tasks with rapid performance jumps.
Feedback welcome!
Yes I think so!
04.07.2025 05:50
Happy to discuss our work on parsimonious & mathematically tractable RNNs for dynamical systems reconstruction next week at
cns2025florence.sched.com/event/1z9Mt/...
Inference in rule shifting tasks: rdcu.be/etlRV
28.06.2025 08:58
Fantastic work by Florian Bähner, Hazem Toutounji, Tzvetan Popov and many others – I'm just the person advertising!
26.06.2025 15:30
How do animals learn new rules? By systematically testing different behavioral strategies, guided by selective attention to rule-relevant cues: rdcu.be/etlRV
Akin to in-context learning in AI, strategy selection depends on the animals' "training set" (prior experience), with similar repr. in rats & humans.
What a line-up!! With Lorenzo Gaetano Amato, Demian Battaglia, @durstewitzlab.bsky.social, @engeltatiana.bsky.social, @seanfw.bsky.social, Matthieu Gilson, Maurizio Mattia, @leonardopollina.bsky.social, Sara Solla.
21.06.2025 10:24
Into population dynamics? Coming to #CNS2025 but not quite ready to head home?
Come join us at the Symposium on "Neural Population Dynamics and Latent Representations"!
- July 10th
- Scuola Superiore Sant'Anna, Pisa (and online)
- Free registration: neurobridge-tne.github.io
#compneuro
I'm really looking forward to this! In wonderful Pisa!
21.06.2025 12:18
Just heading back from a fantastic workshop on neural dynamics at Gatsby/London, organized by Tatiana Engel, Bruno Averbeck, & Peter Latham.
Enjoyed seeing so many old friends, Memming Park, Carlos Brody, Wulfram Gerstner, Nicolas Brunel & many others ...
Discussed our recent DS foundation models ...
We dive a bit into the reasons why current time series FMs not trained for DS reconstruction fail, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.
(6/6)
Remarkably, it not only generalizes zero-shot to novel DS, but it can even generalize to new initial conditions and regions of state space not covered by the in-context information.
(5/6)
And no, it's not based on Transformers or Mamba; it's a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (proceedings.neurips.cc/paper_files/...), specifically trained for DS reconstruction.
#AI
(4/6)
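For readers curious what an AL-RNN update looks like: most latent units stay purely linear and only the last few pass through a ReLU, which keeps the map piecewise linear and tractable. A minimal sketch (variable names, the diagonal linear part, and the random parameters are my own illustrative choices, not the paper's code):

```python
import numpy as np

def al_rnn_step(z, A, W, h, P):
    """One almost-linear RNN (AL-RNN) update.

    Only the last P of the M latent units pass through a ReLU;
    the remaining M - P units evolve purely linearly.
    """
    phi = z.copy()
    phi[-P:] = np.maximum(phi[-P:], 0.0)  # ReLU on the last P units only
    return A @ z + W @ phi + h

# Tiny usage example with random parameters (illustrative only)
rng = np.random.default_rng(0)
M, P = 5, 2
A = np.diag(rng.uniform(0.5, 0.9, M))   # contracting diagonal linear part
W = 0.1 * rng.standard_normal((M, M))   # connectivity acting on phi(z)
h = rng.standard_normal(M)
z = rng.standard_normal(M)
for _ in range(10):
    z = al_rnn_step(z, A, W, h, P)
```

With P small relative to M, the state space splits into only a few linear regions, which is what makes these models easy to analyze.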
It often even outperforms TS FMs on forecasting diverse empirical time series, like weather, traffic, or medical data, typically used to train TS FMs.
This is surprising, because DynaMix's training corpus consists *solely* of simulated limit cycles & chaotic systems, no empirical data at all!
(3/6)
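Simulated training trajectories of this kind are cheap to produce in bulk; e.g., a run of the chaotic Lorenz-63 system with its standard parameters. A generic sketch using simple Euler integration (not the actual DynaMix data pipeline):

```python
import numpy as np

def lorenz63(T=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the chaotic Lorenz-63 system with simple Euler steps."""
    x = np.empty((T, 3))
    x[0] = (1.0, 1.0, 1.0)
    for t in range(T - 1):
        X, Y, Z = x[t]
        x[t + 1] = x[t] + dt * np.array([
            sigma * (Y - X),       # dX/dt
            X * (rho - Z) - Y,     # dY/dt
            X * Y - beta * Z,      # dZ/dt
        ])
    return x

traj = lorenz63()
print(traj.shape)  # (10000, 3)
```

Varying the parameters (and the system itself) yields a diverse corpus of limit cycles and chaotic attractors without touching any empirical data.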
Unlike TS FMs, DynaMix exhibits #ZeroShotLearning of long-term stats of unseen DS, incl. attractor geometry & power spectrum, w/o *any* re-training, just from a context signal.
It does so with only 0.1% of the parameters of Chronos & 10x faster inference times than the closest competitor.
(2/6)
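"Long-term statistics" here means attractor-level agreement rather than point-wise forecasts. For the power-spectrum part, one common check in the DS reconstruction literature is a Hellinger distance between normalized power spectra; a simplified sketch (without the spectral smoothing typically applied):

```python
import numpy as np

def spectrum_error(x_true, x_gen, eps=1e-12):
    """Hellinger distance between the normalized power spectra of two
    univariate time series (0 = identical spectra, ~1 = disjoint)."""
    def norm_psd(x):
        p = np.abs(np.fft.rfft(x - x.mean())) ** 2
        return p / (p.sum() + eps)
    p, q = norm_psd(np.asarray(x_true)), norm_psd(np.asarray(x_gen))
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Toy check: a phase shift leaves the power spectrum unchanged,
# a different frequency does not.
n = np.arange(4096)
a = np.sin(2 * np.pi * 64 * n / 4096)
b = np.sin(2 * np.pi * 64 * n / 4096 + 0.3)  # same frequency, shifted phase
c = np.sin(2 * np.pi * 192 * n / 4096)       # different frequency
print(spectrum_error(a, b) < spectrum_error(a, c))  # True
```

This rewards matching the frequency content of the true system even when individual trajectories have long since decorrelated, which is exactly the regime chaotic systems put you in.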
Can time series (TS) #FoundationModels (FM) like Chronos zero-shot generalize to unseen #DynamicalSystems (DS)?
No, they cannot!
But *DynaMix* can, the first TS/DS FM based on principles of DS reconstruction, capturing the long-term evolution of out-of-domain DS: arxiv.org/pdf/2505.131...
(1/6)
I'm presenting our lab's work on *learning generative dynamical systems models from multi-modal and multi-subject data* in the world-wide theoretical neurosci seminar Wed 23rd, 11am ET:
www.wwtns.online
--> incl. recent work on building foundation models for #DynamicalSystems reconstruction #AI
Nature Communications
Uncertainty estimation with prediction-error circuits
www.nature.com/articles/s41...