
Adam J. Eisen

@adamjeisen.bsky.social

Computational Neuroscientist + ML Researcher | Control theory + deep learning to understand the brain | PhD Candidate @ MIT | (he) 🍁

91 Followers  |  76 Following  |  17 Posts  |  Joined: 01.07.2025

Latest posts by adamjeisen.bsky.social on Bluesky

This paper truly changed my life!!

01.12.2025 16:44 — 👍 40    🔁 9    💬 0    📌 0

We put out this preprint a couple months ago, but I really wanted to replicate our findings before we went to publication.

At first, what we found was very confusing!

But when we dug in, it revealed a fascinating neural strategy for how we switch between tasks

doi.org/10.1101/2024.09.29.615736

🧵

27.07.2025 21:31 — 👍 89    🔁 27    💬 2    📌 2

Really proud of this project with @adamjeisen.bsky.social
- Jacobian estimation is a challenging and generic problem in dynamics, and I'm excited for all the future use cases of our method! See you at NeurIPS 🧠💻

28.11.2025 17:42 — 👍 8    🔁 1    💬 0    📌 0

Worth noting that even in the low-dimensional periodic Van der Pol (VDP) system, the loop-closure (LC) loss does seem to help a little bit (the NeuralODE that also sees noisy data doesn't do quite as well). So it seems to be helpful to enforce vector-field conservativity along directions orthogonal to the flow.

28.11.2025 05:05 — 👍 1    🔁 0    💬 1    📌 0

Great catch also on Fig 4G,H. My guess is this reflects a tradeoff: the prediction loss learns the vector field along the flow (larger Lyapunov exponents), while the LC loss enforces vector-field conservativity along all directions. LC thus sacrifices some prediction for consistency, which leads to dampened large Lyapunov exponents but a more accurate overall spectrum.

28.11.2025 05:05 — 👍 1    🔁 0    💬 1    📌 0

Many thanks for your questions, David! Re periodic orbit: great point. In practice (for non-chaotic systems like the VDP or the RNN), noise in the observed data (or added noise) "thickens" the trajectory. This reveals the local relaxation dynamics, allowing us to learn off-attractor structure.

28.11.2025 05:00 — 👍 2    🔁 0    💬 1    📌 0

Thanks for your interest! Please do, I'd be happy to chat.

28.11.2025 04:57 — 👍 1    🔁 0    💬 0    📌 0
I'm Going To Have To Science

General Bonkers makes weird music so you don't have to. New album. With tracks featuring very special guest star Dave Freedman. On the major platforms including
Spotify: open.spotify.com/album/44QQ4x...
Bandcamp:
generalbonkers.bandcamp.com/album/im-goi...

27.11.2025 14:14 — 👍 12    🔁 1    💬 0    📌 0
Characterizing control between interacting subsystems with deep... Biological function arises through the dynamical interactions of multiple subsystems, including those between brain areas, within gene regulatory networks, and more. A common approach to...

13/ 😀Feel free to reach out to discuss this work, or its application to your field of study. Or come swing by our poster at #NeurIPS2025. We'd love to chat!

📄 Paper: openreview.net/forum?id=I82...
💾 Code: github.com/adamjeisen/J...
📍 Poster: Thu 4 Dec 11am - 2pm PST (#2111)

26.11.2025 19:32 — 👍 11    🔁 3    💬 2    📌 0

12/🙏🏻Thanks for following along. And a HUGE thanks to @neurostrow.bsky.social @sarthakc.bsky.social @leokoz8.bsky.social and my advisors @earlkmiller.bsky.social + Ila Fiete for being fantastic collaborators on this project!

26.11.2025 19:32 — 👍 6    🔁 0    💬 1    📌 1

11/🚀This work opens the door to many questions. We're now equipped to ask: what are the control laws governing how we control our attention? And how do these interactions break down in psychiatric conditions?

26.11.2025 19:32 — 👍 4    🔁 0    💬 1    📌 0
Post image

10/⚙️Loop Closure 🔁
We also ensure integrals around closed loops are zero. This forces the model to learn the tangent space dynamics over the whole data manifold, not just along the flow. Why? To control a system, you have to know what happens off its normal path.

26.11.2025 19:32 — 👍 3    🔁 0    💬 1    📌 0
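A minimal sketch of the loop-closure idea in PyTorch (my own toy version, not the paper's loss; the `J_net` callable, the random circular loops, and the quadrature are assumptions for illustration): for a true Jacobian field, the line integral of J around any closed loop vanishes, so penalizing that integral on sampled loops pushes the learned field to stay consistent off the observed flow.

```python
import math
import torch

def loop_closure_penalty(J_net, center, radius=0.1, n_pts=16):
    """Integrate the Jacobian field J_net along a random small circle around
    each row of `center` (batch, dim). Returns the mean squared norm of the
    loop integral, which is ~0 for an exact (integrable) Jacobian field."""
    batch, dim = center.shape
    # Random orthonormal pair spanning each loop's plane.
    basis, _ = torch.linalg.qr(torch.randn(batch, dim, 2))   # (batch, dim, 2)
    integral = torch.zeros(batch, dim)
    dtheta = 2 * math.pi / n_pts
    for k in range(n_pts):
        t = k * dtheta
        offset = radius * (math.cos(t) * basis[..., 0] + math.sin(t) * basis[..., 1])
        tangent = radius * (-math.sin(t) * basis[..., 0] + math.cos(t) * basis[..., 1])
        J = J_net(center + offset)                           # (batch, dim, dim)
        integral = integral + torch.bmm(J, tangent.unsqueeze(-1)).squeeze(-1) * dtheta
    return (integral ** 2).sum(-1).mean()

# Sanity check with an exact (constant) Jacobian field: the penalty is ~0.
A = torch.randn(3, 3)
J_exact = lambda x: A.unsqueeze(0).repeat(x.shape[0], 1, 1)
print(loop_closure_penalty(J_exact, torch.zeros(2, 3)))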
Post image

9/⚙️Time-series Prediction ⛰️
The path integral of Jacobians depends on endpoints, not the path. Think of a mountain peak: your elevation is the same regardless of the trail taken. We parameterize the Jacobian with a deep network, and use this insight for time-series prediction.

26.11.2025 19:32 — 👍 4    🔁 0    💬 1    📌 0
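A minimal sketch of how that endpoint-only property can drive prediction (my reading, in PyTorch; the `JacobianNet` architecture, the straight-line path, and the midpoint quadrature are assumptions, not the released implementation): integrating the learned Jacobian along any path from a reference state x_ref recovers f(x) - f(x_ref), so the vector field can be reconstructed and rolled forward in time.

```python
import torch
import torch.nn as nn

dim = 3

class JacobianNet(nn.Module):
    """Hypothetical deep parameterization of the Jacobian field J(x)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, dim * dim),
        )

    def forward(self, x):                       # x: (batch, dim)
        return self.net(x).view(-1, self.dim, self.dim)

def vector_field(J_net, x, x_ref, f_ref, n_steps=32):
    """Reconstruct f(x) = f_ref + ∫ J(γ(s)) γ'(s) ds along the straight line
    γ(s) = x_ref + s (x - x_ref), using a midpoint quadrature."""
    delta = x - x_ref                           # γ'(s) = delta for all s
    f = f_ref.expand_as(x).clone()
    for i in range(n_steps):
        s = (i + 0.5) / n_steps
        J = J_net(x_ref + s * delta)            # (batch, dim, dim)
        f = f + torch.bmm(J, delta.unsqueeze(-1)).squeeze(-1) / n_steps
    return f

# Usage: roll the reconstructed vector field forward with Euler steps.
J_net = JacobianNet(dim)
x_ref = torch.zeros(1, dim)                     # reference state with
f_ref = torch.zeros(1, dim)                     # assumed-known f(x_ref)
x = torch.randn(4, dim)
with torch.no_grad():
    for _ in range(10):
        x = x + 0.01 * vector_field(J_net, x, x_ref, f_ref)
```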
Post image

8/⚡Controlling neural dynamics
We also used our framework to actively control the network based purely on observed data. By stimulating the sensory area in a targeted way, we precisely manipulated the RNN's behavior and forced it to make a specific incorrect choice.

26.11.2025 19:32 — 👍 3    🔁 0    💬 1    📌 0
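A toy illustration of the general recipe, not the controller used in the paper (the sizes, the linearized two-step horizon, and the least-squares solve are all made up for the example): with an estimated local Jacobian in hand, you can solve for a stimulation of the "sensory" units that steers the "cognitive" units toward a chosen target.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sens, n_cog = 4, 4
n = n_sens + n_cog
dt = 0.05

J_hat = 0.3 * rng.standard_normal((n, n))       # estimated local Jacobian
x = rng.standard_normal(n)                      # current full network state
x_target_cog = np.ones(n_cog)                   # desired "cognitive" state

# Stimulation u enters the "sensory" units only.
B = np.vstack([np.eye(n_sens), np.zeros((n_cog, n_sens))])
cog = slice(n_sens, n)

# Linearized two-step rollout: stimulate at step 1, read out at step 2.
step = lambda z, u: z + dt * (J_hat @ z + B @ u)
x2_free = step(step(x, np.zeros(n_sens)), np.zeros(n_sens))

# u reaches the cognitive units only through the cross-area Jacobian block,
# so the sensitivity of the step-2 cognitive state to u is dt^2 * J_hat[cog, :n_sens].
A = dt ** 2 * J_hat[cog, :n_sens]
u, *_ = np.linalg.lstsq(A, x_target_cog - x2_free[cog], rcond=None)

x2_ctrl = step(step(x, u), np.zeros(n_sens))
print("cognitive error, no stim:  ", np.linalg.norm(x2_free[cog] - x_target_cog))
print("cognitive error, with stim:", np.linalg.norm(x2_ctrl[cog] - x_target_cog))
```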
Post image

7/🎛️ Control between areas
We applied our framework to a simplified model of interacting brain areas: a multi-area recurrent neural network (RNN) trained on a working memory task. After learning the task, its "sensory" area gained control over its "cognitive" area.

26.11.2025 19:32 — 👍 8    🔁 1    💬 1    📌 0
Post image

6/🎯 In rigorous tests, JacobianODEs accurately predicted dynamics and outperformed NeuralODEs on Jacobian estimation, even in noisy, high-dimensional chaotic systems. Accurate control starts with accurate Jacobians, so this was an important check.

Now what can we do with it?👇

26.11.2025 19:32 — 👍 6    🔁 0    💬 1    📌 0
Post image

5/🔎 Estimating the Jacobian from data is difficult. To tackle this, we developed JacobianODE, a deep learning framework that leverages geometric properties of the Jacobian to infer it from data.

Scroll down the thread to learn how it works. For now, does it work?

26.11.2025 19:32 — 👍 8    🔁 1    💬 1    📌 0

4/💫Our method centers on the Jacobian, a mathematical object that provides a moment-to-moment snapshot of how a change in one subsystem affects another. This view of control from the local tangent space allows us to capture rich, context-dependent control dynamics.

26.11.2025 19:32 — 👍 3    🔁 0    💬 1    📌 0
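For concreteness, a tiny example of that snapshot (my own toy system, not from the paper): for a coupled two-subsystem flow, the off-diagonal entries of the Jacobian are exactly the moment-to-moment influence of one subsystem on the other.

```python
import numpy as np

def f(z, a=1.0, c=0.5):
    """Toy coupled dynamics: z = (x, y) holds the two subsystems' states."""
    x, y = z
    return np.array([-a * x + np.tanh(y),        # subsystem 1, driven by 2
                     -a * y + c * np.tanh(x)])   # subsystem 2, driven by 1

def jacobian(z, a=1.0, c=0.5):
    """Analytic Jacobian df/dz of the toy flow at state z."""
    x, y = z
    return np.array([[-a, 1 - np.tanh(y) ** 2],
                     [c * (1 - np.tanh(x) ** 2), -a]])

z = np.array([0.3, -0.1])
J = jacobian(z)
# Off-diagonal entries are the cross-subsystem couplings at this moment:
print("influence of subsystem 1 on 2:", J[1, 0])   # d(dy/dt)/dx
print("influence of subsystem 2 on 1:", J[0, 1])   # d(dx/dt)/dy
```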

3/🎛️Control theory offers a powerful lens to understand these interactions. It describes how inputs can steer a system towards a desired goal. We present a new framework based on control theory that characterizes complex, nonlinear control directly from data.

26.11.2025 19:32 — 👍 4    🔁 0    💬 1    📌 0

2/ Complex systems, including 🧠 brains, 🌲 ecosystems, and 🧬 gene networks, are made of interacting parts. In the brain, different areas coordinate how they interact in different contexts. This is how our attention shifts between our senses, thoughts, and experiences.🖼️🎧💭

26.11.2025 19:32 — 👍 5    🔁 0    💬 1    📌 0
Post image

How do brain areas control each other? 🧠🎛️

✨In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry.🧵⬇️

26.11.2025 19:32 — 👍 85    🔁 29    💬 1    📌 3
Post image

Our next paper on comparing dynamical systems (with particular interest in artificial and biological neural networks) is out!! Joint work with @annhuang42.bsky.social, as well as @satpreetsingh.bsky.social, @leokoz8.bsky.social, Ila Fiete, and @kanakarajanphd.bsky.social: arxiv.org/pdf/2510.25943

10.11.2025 16:16 — 👍 67    🔁 23    💬 4    📌 3
