
DurstewitzLab

@durstewitzlab.bsky.social

Scientific AI / machine learning, dynamical systems (reconstruction), generative surrogate models of brains & behavior, applications in neuroscience & mental health

990 Followers | 1,838 Following | 38 Posts | Joined: 11.11.2024

Latest posts by durstewitzlab.bsky.social on Bluesky

At 17, Hannah Cairo Solved a Major Math Mystery | Quanta Magazine: After finding the homeschooling life confining, the teen petitioned her way into a graduate class at Berkeley, where she ended up disproving a 40-year-old conjecture.

What a fantastic accomplishment -- and what a fantastic story! www.quantamagazine.org/at-17-hannah...

03.08.2025 12:13 | 👍 299  🔁 88  💬 6  📌 6
Post image

Got prov. approval for 2 major grants in Neuro-AI & Dynamical Systems Reconstruction, on learning & inference in non-stationary environments, out-of-domain generalization, and DS foundation models. To all AI/math/DS enthusiasts: Expect job announcements (PhD/PostDoc) soon! Feel free to get in touch.

13.07.2025 06:23 | 👍 34  🔁 8  💬 0  📌 0
What Neuroscience Can Teach AI About Learning in Continuously Changing Environments: Modern AI models, such as large language models, are usually trained once on a huge corpus of data, potentially fine-tuned for a specific task, and then deployed with fixed parameters. Their training ...

We wrote a little #NeuroAI piece about in-context learning & neural dynamics vs. continual learning & plasticity, both mechanisms to flexibly adapt to changing environments:
arxiv.org/abs/2507.02103
We relate this to non-stationary rule learning tasks with rapid performance jumps.

Feedback welcome!

06.07.2025 10:18 | 👍 36  🔁 8  💬 0  📌 0

Yes I think so!

04.07.2025 05:50 | 👍 2  🔁 0  💬 0  📌 0
CNS*2025 Florence: NeuroXAI: Explainable AI for Understandi...

Happy to discuss our work on parsimonious & math. tractable RNNs for dynamical systems reconstruction next week at
cns2025florence.sched.com/event/1z9Mt/...

03.07.2025 12:40 | 👍 9  🔁 0  💬 1  📌 0
Abstract rule learning promotes cognitive flexibility in complex environments across species. Nature Communications - Whether neurocomputational mechanisms that speed up human learning in changing environments also exist in other species remains unclear. Here, the authors show that both...

Inference in rule shifting tasks: rdcu.be/etlRV

28.06.2025 08:58 | 👍 4  🔁 0  💬 0  📌 0

Fantastic work by Florian Bähner, Hazem Toutounji, Tzvetan Popov and many others - I'm just the person advertising!

26.06.2025 15:30 | 👍 0  🔁 0  💬 0  📌 0
Abstract rule learning promotes cognitive flexibility in complex environments across species. Nature Communications - Whether neurocomputational mechanisms that speed up human learning in changing environments also exist in other species remains unclear. Here, the authors show that both...

How do animals learn new rules? By systematically testing diff. behavioral strategies, guided by selective attn. to rule-relevant cues: rdcu.be/etlRV
Akin to in-context learning in AI, strategy selection depends on the animals' "training set" (prior experience), with similar repr. in rats & humans.

26.06.2025 15:30 | 👍 8  🔁 2  💬 1  📌 0

What a line up!! With Lorenzo Gaetano Amato, Demian Battaglia, @durstewitzlab.bsky.social, @engeltatiana.bsky.social, @seanfw.bsky.social, Matthieu Gilson, Maurizio Mattia, @leonardopollina.bsky.social, Sara Solla.

21.06.2025 10:24 | 👍 5  🔁 2  💬 1  📌 0
Post image

Into population dynamics? Coming to #CNS2025 but not quite ready to head home?

Come join us at the Symposium on "Neural Population Dynamics and Latent Representations"! 🧠
📆 July 10th
📍 Scuola Superiore Sant'Anna, Pisa (and online)
👉 Free registration: neurobridge-tne.github.io
#compneuro

21.06.2025 10:24 | 👍 21  🔁 10  💬 1  📌 1

I'm so looking forward to this! In wonderful Pisa!

21.06.2025 12:18 | 👍 1  🔁 0  💬 0  📌 0
Post image

Just heading back from a fantastic workshop on neural dynamics at Gatsby/London, organized by Tatiana Engel, Bruno Averbeck, & Peter Latham.
Enjoyed seeing so many old friends, Memming Park, Carlos Brody, Wulfram Gerstner, Nicolas Brunel & many others …
Discussed our recent DS foundation models …

19.06.2025 11:37 | 👍 6  🔁 0  💬 0  📌 0
Post image

We dive a bit into the reasons why current time series FMs not trained for DS reconstruction fail, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.

(6/6)

20.05.2025 14:15 | 👍 2  🔁 0  💬 0  📌 0
Post image

Remarkably, it not only generalizes zero-shot to novel DS, but even to new initial conditions and regions of state space not covered by the in-context information.

(5/6)

20.05.2025 14:15 | 👍 2  🔁 0  💬 1  📌 0
Post image

And no, it's neither based on Transformers nor Mamba – it's a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (proceedings.neurips.cc/paper_files/...), specifically trained for DS reconstruction.
#AI

(4/6)

20.05.2025 14:15 | 👍 0  🔁 0  💬 1  📌 0
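A minimal Python/NumPy sketch of the AL-RNN idea referenced above, assuming the latent update is linear except for a ReLU applied to a small subset of units; the class name, parameter shapes, and initialization are illustrative, and DynaMix's mixture-of-experts wrapper is not shown.

```python
import numpy as np

class ALRNNSketch:
    """Sketch of an 'almost-linear' RNN update
        z_{t+1} = A z_t + W phi(z_t) + h,
    where phi applies a ReLU to only a small subset (n_nonlin) of the
    n_latent units and leaves the rest linear.
    Names, shapes, and initialization are illustrative only."""

    def __init__(self, n_latent=16, n_nonlin=4, seed=0):
        rng = np.random.default_rng(seed)
        self.A = np.diag(rng.uniform(0.5, 0.9, n_latent))          # diagonal linear part
        self.W = 0.02 * rng.standard_normal((n_latent, n_latent))  # weak full coupling (toy values)
        self.h = np.zeros(n_latent)                                 # bias
        self.n_nonlin = n_nonlin

    def step(self, z):
        phi = z.copy()
        phi[: self.n_nonlin] = np.maximum(0.0, phi[: self.n_nonlin])  # ReLU on a subset only
        return self.A @ z + self.W @ phi + self.h

    def generate(self, z0, T):
        """Freely generate a latent trajectory of length T + 1 from z0."""
        traj = [np.asarray(z0, dtype=float)]
        for _ in range(T):
            traj.append(self.step(traj[-1]))
        return np.stack(traj)

# Example: generate a trajectory from a random initial condition
model = ALRNNSketch()
z0 = np.random.default_rng(1).standard_normal(16)
print(model.generate(z0, T=200).shape)  # (201, 16)
```

Restricting the nonlinearity to a few units is what keeps the model piecewise linear and mathematically tractable; with these toy parameter values the dynamics simply decay, whereas a trained model would be fit to reproduce the target system's attractor.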
Post image

It often even outperforms TS FMs on forecasting diverse empirical time series, like weather, traffic, or medical data of the kind typically used to train TS FMs.

This is surprising, cos DynaMix' training corpus consists *solely* of simulated limit cycles & chaotic systems, no empirical data at all!

(3/6)

20.05.2025 14:15 | 👍 0  🔁 0  💬 1  📌 0
Post image

Unlike TS FMs, DynaMix exhibits #ZeroShotLearning of long-term stats of unseen DS, incl. attractor geometry & power spectrum, w/o *any* re-training, just from a context signal.

It does so with only 0.1% of the parameters of Chronos & 10x faster inference times than the closest competitor.

(2/6)

20.05.2025 14:15 | 👍 0  🔁 0  💬 1  📌 0
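To make "long-term stats" concrete: one plausible way to compare the long-term temporal structure of a freely generated trajectory against ground truth is a distance between their normalized power spectra, as in the Python/NumPy sketch below. The function name, normalization, and Hellinger-style distance are illustrative choices, not necessarily the metric used in the DynaMix paper.

```python
import numpy as np

def spectrum_distance(x_true, x_gen, eps=1e-12):
    """Hellinger-style distance between the normalized power spectra of two
    1D trajectories: 0 = identical spectra, values near 1 = very different
    long-term temporal structure. One plausible 'long-term stats' metric;
    details may differ from the paper."""
    def norm_spec(x):
        x = np.asarray(x, dtype=float)
        x = (x - x.mean()) / (x.std() + eps)   # z-score to remove scale/offset
        p = np.abs(np.fft.rfft(x)) ** 2        # power spectrum
        return p / (p.sum() + eps)             # treat it as a probability distribution
    p, q = norm_spec(x_true), norm_spec(x_gen)
    n = min(len(p), len(q))                    # align lengths if trajectories differ
    return np.sqrt(0.5 * np.sum((np.sqrt(p[:n]) - np.sqrt(q[:n])) ** 2))

# Usage: same oscillation vs. white noise
t = np.linspace(0.0, 100.0, 5000)
print(spectrum_distance(np.sin(t), np.sin(t + 0.3)))        # small (same frequency content)
print(spectrum_distance(np.sin(t), np.random.randn(5000)))  # large (very different spectra)
```

Such spectral (or state-space occupancy) measures probe whether the generated dynamics reproduce the attractor's invariant statistics, rather than scoring short-term point forecasts.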
Post image

Can time series (TS) #FoundationModels (FM) like Chronos zero-shot generalize to unseen #DynamicalSystems (DS)?

No, they cannot!

But *DynaMix* can, the first TS/DS FM based on principles of DS reconstruction, capturing the long-term evolution of out-of-domain DS: arxiv.org/pdf/2505.131...
(1/6)

20.05.2025 14:15 | 👍 8  🔁 3  💬 1  📌 0
World Wide Theoretical Neuroscience Seminar: WWTNS is a weekly digital seminar on Zoom targeting the theoretical neuroscience community. Its aim is to be a platform to exchange ideas among theoreticians.

I'm presenting our lab's work on *learning generative dynamical systems models from multi-modal and multi-subject data* in the world-wide theoretical neurosci seminar Wed 23rd, 11am ET:
www.wwtns.online

--> incl. recent work on building foundation models for #dynamical-systems reconstruction #AI 🧪

21.04.2025 15:38 | 👍 11  🔁 3  💬 0  📌 1
Preview
Uncertainty estimation with prediction-error circuits - Nature Communications: How the brain integrates sensory input and predictions to adapt to change is not fully understood. Here authors build a neural network model to show how prediction-error neurons compute uncertainty of...

Nature Communications

Uncertainty estimation with prediction-error circuits

www.nature.com/articles/s41...

29.03.2025 11:57 | 👍 16  🔁 3  💬 0  📌 1
Preview
Censor, purge, defund: how Trump is following the authoritarian playbook on science and universities. I have mapped 35 of the Trump administration's attacks on science and universities to the authoritarian playbook - and consider what it means for attacks still to come

My latest post is now out.

I show how Trump's attacks on science and universities are neither random nor new - they fit very precisely into the authoritarian playbook.

This means we can guess what might come next and prepare - and we must!

christinapagel.substack.com/p/censor-pur...

10.03.2025 14:58 | 👍 1604  🔁 933  💬 38  📌 62

Our revised #iclr2025 paper and codebase for a foundation-model architecture for dynamical systems reconstruction are now online: openreview.net/pdf?id=Vp2OA...

... including additional examples of how this may be leveraged for identifying drivers (control par.) of non-stationary processes.

09.02.2025 14:18 | 👍 6  🔁 0  💬 0  📌 0

Spearheaded by Manuel Brenner & Elias Weber, together with Georgia Koppe.
(4/4)

26.01.2025 11:28 | 👍 4  🔁 0  💬 0  📌 0
Post image

This gives rise to an interpretable latent feature space, where datasets with similar dynamics cluster. Intriguingly, this clustering according to *dynamical systems features* led to much better separation of groups than could be achieved by more trad. time series features.
(3/4)

26.01.2025 11:28 | 👍 4  🔁 0  💬 1  📌 0
Post image

We show applications like transfer & few-shot learning, but perhaps most interestingly, subject/system-specific features were often linearly related to control parameters of the underlying dynamical systems the model was trained on …
(2/4)

26.01.2025 11:28 | 👍 2  🔁 0  💬 1  📌 0
Post image

Toward interpretable #AI foundation models for #DynamicalSystems reconstruction: Our paper on transfer & few-shot learning for dynamical systems just got accepted for #ICLR2025 !

Previous version: arxiv.org/pdf/2410.04814; a substantially updated version will be available soon ...
(1/4)

26.01.2025 11:28 | 👍 12  🔁 2  💬 1  📌 1
A photo of a teacher with a blackboard in the background. The essence: Teaching is one of the best means of learning.

29.12.2024 14:58 | 👍 23  🔁 4  💬 1  📌 0

Dynamical systems (DS) reconstruction (DSR)

25.12.2024 18:30 | 👍 3  🔁 0  💬 0  📌 0

... for latent DSR models the idea is that they implicitly reconstruct missing dim. in their latent space, even from highly noisy observations.

25.12.2024 13:16 | 👍 4  🔁 0  💬 1  📌 0

Yes, the original versions of the delay embedding theorems (Takens, Sauer) are for deterministic systems, though they are applied to noisy real data in practice, and there are more recent extensions, e.g. to spike processes (pubs.aip.org/aip/cha/arti...) or stochastic systems (link.springer.com/article/10.1..., www.sciencedirect.com/science/arti...).

25.12.2024 13:16 | 👍 4  🔁 0  💬 1  📌 0
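For context, a minimal Python/NumPy sketch of a Takens-style delay-coordinate embedding, the construction these theorems are about; the function name and the example values for dim and tau are illustrative, and choosing them well for noisy real data is a topic of its own.

```python
import numpy as np

def delay_embed(x, dim=3, tau=25):
    """Map a scalar time series x(t) to delay vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)].
    Under the delay embedding theorems, for suitable dim and tau these
    vectors reconstruct the underlying attractor up to a smooth change
    of coordinates (toy values for dim and tau used here)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for the chosen dim/tau")
    # each row is one reconstructed state-space point
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: embed a single observed coordinate of an oscillation
t = np.arange(0.0, 100.0, 0.01)
signal = np.sin(t) + 0.05 * np.random.randn(len(t))  # noisy scalar observation
points = delay_embed(signal, dim=3, tau=25)
print(points.shape)  # (n_points, 3)
```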
