
William Gilpin

@wgilpin.bsky.social

asst prof at UT Austin physics interested in chaos, fluids, & biophysics. https://www.wgilpin.com/

653 Followers  |  411 Following  |  43 Posts  |  Joined: 08.12.2023

Posts by William Gilpin (@wgilpin.bsky.social)

Delighted this paper is out! Soft solids fracture in complex ways. Can we control it using structure and activity? Yes, using defects that localize energy injection for targeted failure! Amazing work combining experiment, theory & ML by Sheng Chen, in collaboration with the Murrell lab (Yale).

27.02.2026 19:19 — 👍 12    🔁 5    💬 0    📌 0
GitHub - williamgilpin/icicl: In-context learning of transfer operators for dynamical systems

GitHub: github.com/williamgilpi...
Preprint: arxiv.org/abs/2602.18679
Joint work with Anthony Bao and Jeff Lai

26.02.2026 15:52 — 👍 1    🔁 0    💬 0    📌 0

The work was inspired by recent work on small transformers, which isolates surprising behaviors like in-context k-nearest neighbors, and even in-context learning of small MLPs. See Garg et al. 2022: arxiv.org/abs/2208.01066 and Reddy 2024: arxiv.org/abs/2312.03002 (10/N)

26.02.2026 15:52 — 👍 1    🔁 0    💬 1    📌 0

You can think of our setting as a minimal version of what SciML foundation models (FMs) do. For example, PDE FMs are trained at one Reynolds number, but can forecast at different Reynolds numbers. Likewise, physiology FMs zero-shot forecast new subjects, who are likely distinct dynamical systems. (9/N)

26.02.2026 15:52 — 👍 0    🔁 0    💬 1    📌 0

The agreement improves as out-of-distribution loss drops. We repeated these experiments across a hundred different models trained on different systems, using the dysts ODE library. (8/N)

26.02.2026 15:52 — 👍 0    🔁 0    💬 1    📌 0

The results agree, suggesting that small transformers learn transfer operators from the context at test time. They especially capture longer-lived modes, like the invariant distribution and long-lived metastable states (leading eigenvectors). (7/N)

26.02.2026 15:52 — 👍 1    🔁 0    💬 1    📌 0

So transformers infer when tokens arise from a higher-dimensional attractor. How does this enable OOD forecasts? We sample transitions between pairs of k-grams (time-delay embedded inputs), and compare to transition probabilities on the original, full state space. (6/N)

26.02.2026 15:52 — 👍 1    🔁 0    💬 1    📌 0
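
Schematically, the k-gram bookkeeping looks something like this (a toy sketch in my own notation, not the paper's actual pipeline; the binning scheme and function names are illustrative), with the logistic map standing in for a univariate chaotic series:

```python
import numpy as np

def kgram_transition_matrix(x, k=3, n_bins=8):
    """Empirical transition matrix between discretized k-grams
    (sliding windows of length k) of a univariate series x."""
    grams = np.stack([x[t:t + k] for t in range(len(x) - k + 1)])
    # Quantize each coordinate of each k-gram into one of n_bins cells
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    digits = np.digitize(grams, edges)
    # Collapse each quantized k-gram to a single integer symbol
    symbols = digits @ (n_bins ** np.arange(k))
    _, labels = np.unique(symbols, return_inverse=True)
    n = labels.max() + 1
    # Count observed transitions symbol[t] -> symbol[t + 1]
    counts = np.zeros((n, n))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(rows, 1)  # row-stochastic where visited

# Chaotic univariate series: the logistic map at r = 4
x = np.empty(5000)
x[0] = 0.2
for t in range(1, len(x)):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])

P = kgram_transition_matrix(x, k=3, n_bins=8)
```

The same counting can then be repeated on the full state space of the generating system, and the two transition matrices compared.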

How is this possible? We extract the conditional probabilities of the small transformer and find that it time-delay embeds its univariate input. When test data comes from a higher-dimensional system, attention rollouts become higher-rank, i.e. an adaptive delay embedding. (5/N)

26.02.2026 15:52 — 👍 0    🔁 0    💬 1    📌 0
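
The way a delay embedding "unfolds" extra dimensions as rank can be sketched generically (this is a standard Takens-style construction, not the paper's attention-rollout analysis; the helper names are mine):

```python
import numpy as np

def delay_embed(x, k, tau=1):
    """Stack k delayed copies of a univariate signal into rows of
    delay vectors (a Takens-style time-delay embedding)."""
    T = len(x) - (k - 1) * tau
    return np.stack([x[i * tau:i * tau + T] for i in range(k)], axis=1)

def effective_rank(H, tol=1e-2):
    """Number of singular values above tol * largest: a crude proxy
    for how many dimensions the embedding has unfolded."""
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

t = np.linspace(0, 200, 5000)
simple = np.sin(t)                           # one frequency: rank 2
richer = np.sin(t) + np.sin(np.sqrt(2) * t)  # two incommensurate tones: rank 4
H_simple = delay_embed(simple, k=8, tau=20)
H_richer = delay_embed(richer, k=8, tau=20)
```

A signal generated by richer dynamics needs more delay coordinates, so its embedding matrix has higher rank, which is loosely the behavior the attention rollouts mirror.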

We can also see this in the loss curves: epoch-wise double descent for our in-distribution test data (a different trajectory from the same ODE), and then a second double descent for out-of-distribution data from an unseen ODE. (4/N)

26.02.2026 15:52 — 👍 0    🔁 0    💬 1    📌 0

In our new paper we shrink this phenomenon down: we fully train a small Chronos-like transformer to forecast exactly one dynamical system, and then test its ability to forecast a second dynamical system. Even in this restricted setting, it works much better than expected. (3/N)

26.02.2026 15:52 — 👍 0    🔁 1    💬 1    📌 0

Last year, we noticed that off-the-shelf time series foundation models, which never saw ODEs during training, forecast chaotic systems surprisingly well, even without fine-tuning. (2/N)

26.02.2026 15:52 — 👍 0    🔁 1    💬 1    📌 0

How do time series foundation models forecast unseen dynamical systems? In new experiments, we find that small transformers learn to approximate transfer operators in-context. (1/N)
arxiv.org/abs/2602.18679

26.02.2026 15:52 — 👍 7    🔁 2    💬 1    📌 0
2026 Cottrell Scholar Awards: Research Corporation for Science Advancement, America's first foundation dedicated wholly to science, has named 24 early career scholars in chemistry, physics, and astronomy as recipients of its 2026 ...

RCSA welcomes 24 early career teacher-scholars in chemistry, physics, and astronomy as recipients of its 2026 #CottrellScholar Awards. Each awardee receives $120,000. Congratulations to this exceptional class!

12.02.2026 16:02 — 👍 17    🔁 5    💬 3    📌 7

Congratulations!

27.12.2025 10:14 — 👍 1    🔁 0    💬 1    📌 0

I'm very happy to announce that I will be joining #Seoul National University's #SNU #physics department as an Assistant Professor in Fall 2026!

27.12.2025 10:02 — 👍 11    🔁 1    💬 3    📌 0

🚨 POSTDOC OPENING 🚨
NIH-funded Bio-Fluid Mechanics Postdoc in my lab @univmiami.bsky.social
Hofstenia miamia | cilia-driven flows | behavior & neuroscience
Collab w/ Mansi Srivastava @harvard.edu
🕒 Start: Jan–Feb 2026
⏳ 1 yr, renewable | Email me ASAP!
#Postdoc #Biophysics #FluidDynamics

23.12.2025 18:31 — 👍 30    🔁 27    💬 0    📌 1
Innovation-exnovation dynamics on trees and trusses: Innovation and its complement exnovation describe the progression of realized possibilities from the past to the future, and the process depends on the structure of the underlying graph. For example, ...

I'm excited to say that one of the most exploratory and thought-provoking papers I've worked on in recent years was just accepted at Physical Review Research.

Preprint here: arxiv.org/abs/2502.21072

#physics #innovation 🧪🦋 @apsphysics.bsky.social

08.07.2025 20:06 — 👍 21    🔁 3    💬 3    📌 1
Three College of Natural Sciences Faculty Win NSF CAREER Awards: 3 UT faculty in computer science and physics won an NSF award recognizing their potential to serve as academic role models.

Kudos to Edoardo Baldini, William Gilpin & Daehyeok Kim on earning Faculty Early Career Development Program (CAREER) Awards from the National Science Foundation!

#NSF #CAREERAwards #EarlyCareerDevelopment #TexasScience @wgilpin.bsky.social @utphysics.bsky.social
cns.utexas.edu/news/accolad...

21.08.2025 16:46 — 👍 2    🔁 1    💬 0    📌 0

hi david, thank you very much :)

17.06.2025 02:06 — 👍 1    🔁 0    💬 0    📌 0
Optimization hardness constrains ecological transients. Author summary: Distinct species can serve overlapping functions in complex ecosystems. For example, multiple cyanobacteria species within a microbial mat might serve to fix nitrogen. Here, we show mat...

Paper: doi.org/10.1371/jour...

Code: github.com/williamgilpi...

Explanatory Website & Code demo: williamgilpin.github.io/illotka/demo...

16.06.2025 17:28 — 👍 2    🔁 0    💬 0    📌 0

This work was inspired by amazing recent work on transients by the dynamical systems community: analogue k-SAT solvers, slowdowns in gradient descent during neural network training, and chimera states in coupled oscillators. (12/N)

16.06.2025 17:28 — 👍 1    🔁 0    💬 1    📌 0

For the Lotka-Volterra case, optimal coordinates are the right singular vectors of the species interaction matrix. You can experimentally estimate these with O(N) operations using Krylov-style methods: perturb the ecosystem, and see how it reacts. (11/N)

16.06.2025 17:28 — 👍 0    🔁 0    💬 1    📌 0
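
A minimal sketch of the perturb-and-observe idea (my own toy version, with a random stand-in "interaction matrix"; the function names are illustrative): power iteration on AᵀA recovers the leading right singular vector using only matrix-vector products, never a full SVD.

```python
import numpy as np

def leading_right_singular_vector(matvec, rmatvec, n, iters=300, seed=0):
    """Power iteration on A^T A using only products with A ('perturb the
    system') and A^T ('observe the response'), never a full SVD."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = rmatvec(matvec(v))   # one application of A^T A
        v /= np.linalg.norm(v)
    return v

# Stand-in "interaction matrix" with one dominant direction
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((40, 40))
A += np.outer(rng.standard_normal(40), rng.standard_normal(40))

v_est = leading_right_singular_vector(lambda x: A @ x, lambda y: A.T @ y, 40)
_, _, Vt = np.linalg.svd(A)
alignment = abs(v_est @ Vt[0])   # approaches 1 (up to sign) at convergence
```

Each matvec costs O(N) probes of the system, which is the sense in which these directions are experimentally estimable.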

This variation influences how we reduce the dimensionality of biological time series. With non-reciprocal interactions (like predator-prey), PCA won't always separate timescales. The optimal dimensionality-reducing variables ("ecomodes") should precondition the linear problem. (10/N)

16.06.2025 17:28 — 👍 1    🔁 0    💬 1    📌 0

As a consequence of ill-conditioning, large ecosystems become excitable: small changes cause huge differences in how they approach equilibrium. Using the FLI, a metric invented by astrophysicists to study planetary orbits, we see caustics indicating variation in the solution path. (9/N)

16.06.2025 17:28 — 👍 0    🔁 0    💬 1    📌 0
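
For intuition, the FLI itself is only a few lines: push a tangent vector through the Jacobian along the orbit and record the largest log-norm it reaches. Here it is on the Chirikov standard map as a classic testbed, not on the Lotka-Volterra systems from the paper:

```python
import numpy as np

def fli(step, jac, x0, n_steps=500):
    """Fast Lyapunov Indicator: the largest log-magnitude reached by a
    tangent vector carried along the trajectory by the Jacobian."""
    x = np.array(x0, dtype=float)
    v = np.ones_like(x) / np.sqrt(x.size)
    best = -np.inf
    for _ in range(n_steps):
        v = jac(x) @ v              # evolve the tangent (variational) vector
        x = step(x)                 # advance the trajectory itself
        best = max(best, np.log(np.linalg.norm(v)))
    return best

# Chirikov standard map at K = 1.2: chaotic and regular orbits coexist
K = 1.2

def step(z):
    theta, p = z
    p_new = (p + K * np.sin(theta)) % (2 * np.pi)
    return np.array([(theta + p_new) % (2 * np.pi), p_new])

def jac(z):
    c = K * np.cos(z[0])
    return np.array([[1.0 + c, 1.0],
                     [c,       1.0]])

fli_chaotic = fli(step, jac, [0.1, 0.0])          # near the hyperbolic point
fli_regular = fli(step, jac, [np.pi + 0.1, 0.0])  # inside the main island
```

Chaotic orbits give an FLI that grows linearly with time, while regular ones grow only logarithmically, so the chaotic value comes out far larger.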

How would hard optimization problems arise in nature? I used genetic algorithms to evolve ecosystems towards supporting more biodiversity, and they became more ill-conditioned, and thus more prone to supertransients. (8/N)

16.06.2025 17:28 — 👍 0    🔁 0    💬 1    📌 0

So ill-conditioning isn't just something numerical analysts care about. It's a physical property that measures computational complexity, which translates to super long equilibration times in large biological networks with trophic overlap. (7/N)

16.06.2025 17:28 — 👍 1    🔁 0    💬 1    📌 0

More precisely: the expected equilibration time of a random Lotka-Volterra system scales with the condition number of the species interaction matrix. The scaling matches the expected scaling of the solvers that your computer uses to do linear regression (6/N)

16.06.2025 17:28 — 👍 1    🔁 0    💬 1    📌 0
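
The linear-regression analogy can be sanity-checked in a few lines (a toy sketch with made-up matrices, not the paper's Lotka-Volterra experiment; helper names are mine): gradient descent on a least-squares problem slows down as the condition number of the matrix grows.

```python
import numpy as np

def matrix_with_condition(n, kappa, seed=0):
    """Random n x n matrix with prescribed condition number kappa."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.geomspace(kappa, 1.0, n)          # singular values kappa ... 1
    return (U * s) @ V.T

def gd_iters(A, b, tol=1e-6, max_iter=100_000):
    """Gradient-descent iterations on 0.5*||Ax - b||^2 until within tol
    of the exact solution; this count grows with the conditioning of A."""
    H = A.T @ A
    lr = 1.0 / np.linalg.norm(H, 2)          # stable step: 1 / lambda_max
    x = np.zeros(A.shape[1])
    x_star = np.linalg.solve(H, A.T @ b)
    for i in range(max_iter):
        x -= lr * (H @ x - A.T @ b)
        if np.linalg.norm(x - x_star) < tol:
            return i + 1
    return max_iter

b = np.random.default_rng(2).standard_normal(20)
well = gd_iters(matrix_with_condition(20, kappa=3.0), b)
ill = gd_iters(matrix_with_condition(20, kappa=30.0), b)
```

The ill-conditioned problem takes far more iterations (roughly the square of the condition-number ratio for plain gradient descent), the same flavor of slowdown the post attributes to equilibration.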

We can think of ecological dynamics as an analogue constraint satisfaction problem. As the problem becomes more ill-conditioned, the ODEs describing the system take longer to "solve" the problem of who survives and who goes extinct. (5/N)

16.06.2025 17:28 — 👍 2    🔁 0    💬 1    📌 0

But is equilibrium even relevant? In high dimensions, stable fixed points might not be reachable in finite time. Supertransients arise from unstable solutions that trap dynamics for increasingly long durations. E.g., pipe turbulence is supertransient (laminar flow is globally stable). (4/N)

16.06.2025 17:28 — 👍 1    🔁 0    💬 1    📌 0

Dynamical systems are linear near fixed points, so May used random matrix theory to show large random ecosystems are usually unstable. The biodiversity we see in the real world requires finer-tuned structure from selection, niches, etc. that recovers stability. (3/N)

16.06.2025 17:28 — 👍 1    🔁 0    💬 1    📌 0