@felix-m-koehler.bsky.social
Machine Learning & Simulation | YouTuber | PhD student @ Thuerey Group
Check out my latest video on implementing an attention-based neural operator/emulator (i.e., a Transformer) in JAX:
youtu.be/GVVWpyvXq_s
We will use APEBench to train, test, and benchmark it in an advection scenario against a feedforward ConvNet.
arxiv.org/abs/2411.00180
Travelling to Singapore next week for #ICLR2025, presenting this paper (Sat 3 pm, nr. 538): arxiv.org/abs/2502.19611
DM me (Whova, email, or bsky) if you want to chat about (autoregressive) neural emulators/operators for PDEs, autodiff, differentiable physics, numerical solvers, etc.
Notebook: github.com/Ceyron/machi...
04.04.2025 14:36
Check out my latest video on approximating the full Lyapunov spectrum for the Lorenz system: youtu.be/Enves8MDwms
Nice showcase of #JAX's features (a sketch follows below):
- `jax.lax.scan` for the autoregressive rollout
- `jax.linearize` for repeated JVPs
- `jax.vmap` for automatic vectorization
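Not the notebook's exact code, but a minimal sketch of how these three primitives combine for the Lorenz-63 spectrum; the explicit-Euler integrator and step counts are placeholder choices, not the video's setup:

```python
import jax
import jax.numpy as jnp

def lorenz_step(u, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of Lorenz-63 (placeholder integrator, not the video's scheme).
    x, y, z = u
    du = jnp.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return u + dt * du

def lyapunov_spectrum(u0, num_steps=50_000, dt=0.01):
    def step(carry, _):
        u, basis = carry
        # `jax.linearize` gives the next state and a reusable JVP around it.
        u_next, jvp = jax.linearize(lambda v: lorenz_step(v, dt), u)
        # `jax.vmap` pushes all tangent basis vectors (columns) through the Jacobian at once.
        pushed = jax.vmap(jvp, in_axes=1, out_axes=1)(basis)
        # Re-orthonormalize (QR) and record the local stretching factors.
        q, r = jnp.linalg.qr(pushed)
        return (u_next, q), jnp.log(jnp.abs(jnp.diag(r)))

    # `jax.lax.scan` performs the autoregressive rollout of state + tangent basis.
    _, log_stretch = jax.lax.scan(step, (u0, jnp.eye(3)), None, length=num_steps)
    return log_stretch.sum(axis=0) / (num_steps * dt)

print(lyapunov_spectrum(jnp.array([1.0, 1.0, 1.0])))  # approx. (0.9, 0, -14.6) for the true Lorenz-63 flow
```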
Art.
28.03.2025 13:24
Today, I had the chance to present my #NeurIPS paper "APEBench" @SimAI4Science. You can find the recording on YouTube: youtu.be/wie-SzD6AJE
18.02.2025 18:47
To get started with APEBench, install it via `pip install apebench` and check out the public documentation: tum-pbs.github.io/apebench/
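From my recollection of the README, a complete study (data generation, training, rollout evaluation) can be launched in a few lines; the scenario and config string names below are assumptions, so verify them against the documentation:

```python
import apebench

# Pre-configured 1D advection scenario in "difficulty" parameterization
# (name assumed from the README; double-check against the docs).
advection_scenario = apebench.scenarios.difficulty.Advection()

# Procedurally generates data, trains the network, and evaluates rollout metrics.
data, trained_nets = advection_scenario(
    task_config="predict",
    network_config="Conv;34;10;relu",
    train_config="one",
    num_seeds=3,
)
```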
12.02.2025 16:08
Finally, there are so many cool experiments we did to gain insights into neural emulators and to highlight limitations they inherit from their numerical simulator counterparts. You find all the details in the paper: arxiv.org/pdf/2411.00180
12.02.2025 16:08
And to enforce good practices, APEBench is designed around controllable, deterministic pseudo-randomness, which makes it straightforward to run seed statistics that can then be used for hypothesis tests.
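In JAX terms, that controllable determinism boils down to explicit PRNG keys; a toy sketch of collecting seed statistics, where `run_experiment` is a hypothetical stand-in and not APEBench's internals:

```python
import jax
import jax.numpy as jnp

def run_experiment(seed: int):
    # Hypothetical stand-in for a full train-and-evaluate run; returns a dummy rollout error.
    key = jax.random.PRNGKey(seed)
    init_key, noise_key = jax.random.split(key)
    params = jax.random.normal(init_key, (8,))
    return jnp.sum(params**2) + 0.1 * jax.random.normal(noise_key, ())

# The same seed list always reproduces the same statistics, ready for hypothesis tests.
errors = jnp.stack([run_experiment(s) for s in range(10)])
print(errors.mean(), errors.std())
```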
12.02.2025 16:08
Another important contribution is that APEBench defines most of its PDEs via a new parameterization that we call "difficulties". Those allow for expressing a wide range of different dynamics with a reduced and interpretable set of numbers.
12.02.2025 16:08
This allows for investigating how unrolled training helps with long-term accuracy.
12.02.2025 16:08
The temporal axis also covers the various configurations in which emulator and simulator can interact during training, for example supervised unrolled training. We generalize many approaches from the literature in terms of unrolled steps T and branch steps B (see the sketch below).
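As a hedged illustration (not APEBench's actual training loop, and branch steps B are omitted for brevity): supervised unrolled training over T steps can be expressed as a `jax.lax.scan` of the emulator against a reference trajectory, with gradients flowing through the whole rollout:

```python
import jax
import jax.numpy as jnp

def unrolled_loss(params, emulator_step, u0, reference_trajectory):
    # `emulator_step(params, u)` is a placeholder signature for one emulator step.
    # `reference_trajectory` has shape (T, num_points): the simulator states 1..T after u0.
    def step(u, u_ref):
        u_next = emulator_step(params, u)
        return u_next, jnp.mean((u_next - u_ref) ** 2)

    _, per_step_errors = jax.lax.scan(step, u0, reference_trajectory)
    return per_step_errors.mean()

# Differentiating through the rollout gives the unrolled-training gradient.
unrolled_grad = jax.grad(unrolled_loss)
```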
12.02.2025 16:08
One core motivation for APEBench was the temporal axis in emulator learning (hence the "autoregressive" in APE). We focus on rollout metrics and sample rollouts to truly understand temporal generalization, i.e., long-term stability and accuracy, across more than 20 metrics.
12.02.2025 16:08
We, of course, also ship a wide range of popular emulator architectures, all implemented in JAX and designed to be agnostic to spatial dimension and boundary conditions. If you don't like APEBench (which I cannot imagine), they are also available individually: github.com/Ceyron/pdequ...
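Boundary handling is easy to picture in plain JAX; this is not the pdequinox API, just a hypothetical periodic 1D convolution that wrap-pads before applying the stencil:

```python
import jax.numpy as jnp

def periodic_conv1d(u, kernel):
    # Wrap-pad so the stencil sees periodic neighbours, then convolve without further padding.
    half = kernel.shape[0] // 2
    u_padded = jnp.pad(u, half, mode="wrap")
    return jnp.convolve(u_padded, kernel, mode="valid")

u = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, 64, endpoint=False))
smoothed = periodic_conv1d(u, jnp.array([0.25, 0.5, 0.25]))
```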
12.02.2025 16:08
The solver is also available as an individual package, Exponax: github.com/Ceyron/exponax
12.02.2025 16:08
This numerical solver is based on Fourier pseudo-spectral ETDRK methods, one of the most efficient numerical techniques for solving semi-linear PDEs on periodic boundaries. We provide a wide range of pre-defined configurations (46 as of the initial release).
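For intuition, a first-order sketch (exponential Euler / ETDRK1, without the higher-order schemes or dealiasing that a production solver like Exponax would use) of one step for 1D Burgers on a periodic domain:

```python
import jax.numpy as jnp

def etdrk1_burgers_step(u, dt=0.01, nu=0.01, domain_length=2.0 * jnp.pi):
    # One step of u_t = -u u_x + nu u_xx with periodic boundaries.
    n = u.shape[0]
    k = 2.0 * jnp.pi * jnp.fft.rfftfreq(n, d=domain_length / n)  # angular wavenumbers
    lin = -nu * k**2  # linear (diffusion) operator in Fourier space

    u_hat = jnp.fft.rfft(u)
    # Nonlinearity -u u_x evaluated pseudo-spectrally: derivative in Fourier space, product in physical space.
    nonlin_hat = jnp.fft.rfft(-u * jnp.fft.irfft(1j * k * u_hat, n=n))

    exp_lin = jnp.exp(lin * dt)
    # (exp(L dt) - 1) / L, with the L -> 0 limit (= dt) handled explicitly.
    phi = jnp.where(lin == 0.0, dt, (exp_lin - 1.0) / jnp.where(lin == 0.0, 1.0, lin))
    return jnp.fft.irfft(exp_lin * u_hat + phi * nonlin_hat, n=n)
```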
12.02.2025 16:08
With it, we can _procedurally_ generate all data ever needed in seconds on a modern GPU. Yes, this means you do not have to download hundreds of GBs of data; installing the APEBench Python package (<1 MB) is sufficient.
12.02.2025 16:08
The key innovation is to tightly integrate a classical numerical solver that produces all the synthetic training data with incredible efficiency and allows for easy scenario customization.
12.02.2025 16:08 β π 0 π 0 π¬ 1 π 0Thanks @munichcenterml.bsky.social for highlighting my recent #NeurIPS paper: APEBench,
a new benchmark suite for autoregressive emulators of PDEs to understand how we might solve the models of nature more efficiently. More details π§΅
Visual summary on project page: tum-pbs.github.io/apebench-pap...
Our online book on systems principles of LLM scaling is live at jax-ml.github.io/scaling-book/
We hope that it helps you make the most of your computing resources. Enjoy!
I'd like to thank everyone contributing to our five accepted ICLR papers for the hard work! Great job everyone! Here's a quick list, stay tuned for details & code in the upcoming weeks…
23.01.2025 03:14
Scholar Inbox is amazing. Thanks for the great tool!
16.01.2025 18:48
Amazing ❤️
Thanks for sharing and the kind words.
I created a video to help you get started using the APEBench suite (my recent #neurips paper) to benchmark autoregressive neural emulators for PDEs with a simple ConvNet emulation of 1D advection: youtu.be/q8fjQ4ZFynw
07.01.2025 18:42
A screenshot of the YouTube channel homepage with "25K subscribers" highlighted in a red hand-drawn circle.
Happy new year! Two days ago we entered 2025, and just in time the channel surpassed 25k subscribers. Wow! Thanks to everyone for their kind words and support along the way: www.youtube.com/channel/UCh0...
02.01.2025 12:55
Check out my latest video on approximating the largest Lyapunov exponent of a dynamical system by integrating a tangent linear perturbation dynamic via autodiff in JAX: youtu.be/zRMBIkpcuu0
Very neat use case of forward-mode AD for an efficient Lyapunov approximation.
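A minimal sketch of the idea (using discrete steps of a placeholder Euler map rather than the video's continuous tangent-linear integration): `jax.jvp` advances the perturbation, and repeated renormalization keeps it finite:

```python
import jax
import jax.numpy as jnp

def lorenz_step(u, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of Lorenz-63 (placeholder integrator for illustration).
    x, y, z = u
    du = jnp.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return u + dt * du

def largest_lyapunov(u0, num_steps=50_000, dt=0.01):
    def step(carry, _):
        u, v = carry
        # Forward-mode AD: push the perturbation v through the linearized step map.
        u_next, v_next = jax.jvp(lambda w: lorenz_step(w, dt), (u,), (v,))
        growth = jnp.linalg.norm(v_next)
        # Renormalize and accumulate the logarithmic growth rate.
        return (u_next, v_next / growth), jnp.log(growth)

    _, log_growth = jax.lax.scan(step, (u0, jnp.array([1.0, 0.0, 0.0])), None, length=num_steps)
    return log_growth.sum() / (num_steps * dt)

print(largest_lyapunov(jnp.array([1.0, 1.0, 1.0])))  # around 0.9 for Lorenz-63
```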
Really enjoyed our closing hockey game!
18.12.2024 09:37
Automatic differentiation in forward mode computes derivatives by breaking functions down into elementary operations and propagating derivatives alongside values. It's efficient for functions with fewer inputs than outputs and for Jacobian-vector products, implemented for instance via dual numbers.
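A minimal JAX illustration (JAX implements forward mode via JVPs rather than exposing dual numbers directly):

```python
import jax
import jax.numpy as jnp

def f(x):
    # Map from R^2 to R^3: fewer inputs than outputs, the sweet spot for forward mode.
    return jnp.array([jnp.sin(x[0]), x[0] * x[1], jnp.exp(x[1])])

x = jnp.array([0.5, 1.5])
v = jnp.array([1.0, 0.0])  # tangent direction

# One forward pass yields both the value and the Jacobian-vector product J(x) @ v.
y, jvp_out = jax.jvp(f, (x,), (v,))

# Cross-check against the full 3x2 Jacobian assembled column by column.
print(jnp.allclose(jax.jacfwd(f)(x) @ v, jvp_out))  # True
```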
13.12.2024 06:00
Now presenting APEBench at #NeurIPS in West #5407.
12.12.2024 18:46