Joint work of @vetterj.bsky.social, Manuel Gloeckler, @danielged.bsky.social, and @jakhmack.bsky.social
@ml4science.bsky.social, @tuebingen-ai.bsky.social, @unituebingen.bsky.social
Check out the full paper for methods, experiments, and more cool stuff: arxiv.org/abs/2504.17660
Code is available: github.com/mackelab/npe...
Takeaway:
By leveraging foundation models like TabPFN, we can make SBI training-free, simulation-efficient, and easy to use.
This work is another step toward user-friendly Bayesian inference for a broader science and engineering community.
Results on the pyloric simulator. (a) Voltage traces from the experimental measurement (top) and a posterior predictive simulation using the posterior mean from TSNPE-PF as the parameter (bottom). (b) Average distance (energy scoring rule) to the observation and percentage of valid simulations from posterior samples, compared to experimental results obtained in Glaser et al.
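The energy scoring rule used as the distance in panel (b) can be estimated directly from posterior predictive samples. A minimal numpy sketch of the sample-based estimator (my own illustration, not the paper's code):

```python
import numpy as np

def energy_score(samples, y):
    """Sample-based energy score ES(P, y) = E||X - y|| - 0.5 * E||X - X'||.

    samples: (n, d) draws from the predictive distribution P; y: (d,) observation.
    Lower is better; the score is 0 when all mass sits exactly on y.
    """
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=-1))
    return term1 - 0.5 * term2
```

The pairwise term makes this a proper scoring rule, so it rewards both accuracy and calibrated spread of the predictive samples.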
But does it scale to complex real-world problems? We tested it on two challenging Hodgkin-Huxley-type models:
- a single-compartment neuron
- a 31-parameter crab pyloric network
NPE-PF delivers tight posteriors & accurate predictions with far fewer simulations than previous methods.
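The single-compartment Hodgkin-Huxley model mentioned above can be simulated in a few lines. A minimal forward-Euler sketch with standard squid-axon textbook parameters (an illustration, not the authors' exact simulator):

```python
import numpy as np

def simulate_hh(i_inj=10.0, t_max=50.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley neuron, forward-Euler integration.

    i_inj in uA/cm^2, time in ms, voltage in mV.
    """
    g_na, g_k, g_l = 120.0, 36.0, 0.3      # max conductances (mS/cm^2)
    e_na, e_k, e_l = 50.0, -77.0, -54.387  # reversal potentials (mV)
    c_m = 1.0                              # membrane capacitance (uF/cm^2)

    n_steps = int(t_max / dt)
    v = np.empty(n_steps)
    v[0], m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state

    for t in range(1, n_steps):
        vm = v[t - 1]
        # voltage-dependent gating rates (1/ms)
        a_m = 0.1 * (vm + 40.0) / (1.0 - np.exp(-(vm + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(vm + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(vm + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(vm + 35.0) / 10.0))
        a_n = 0.01 * (vm + 55.0) / (1.0 - np.exp(-(vm + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(vm + 65.0) / 80.0)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        i_ion = (g_na * m ** 3 * h * (vm - e_na)
                 + g_k * n ** 4 * (vm - e_k)
                 + g_l * (vm - e_l))
        v[t] = vm + dt * (i_inj - i_ion) / c_m
    return v
```

In an SBI setting, the conductances (and possibly the reversal potentials) would be the parameters to infer from summary statistics of the voltage trace.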
SBI benchmark results for amortized and sequential NPE-PF. C2ST for NPE, NLE, and NPE-PF across ten reference posteriors (lower is better); dots indicate averages and bars show 95% confidence intervals over five independent runs.
What you get with NPE-PF:
- No need to train inference nets or tune hyperparameters.
- Competitive or superior performance vs. standard SBI methods.
- Especially strong performance for smaller simulation budgets.
- Filtering to handle large datasets + support for sequential inference.
Illustration of NPE and NPE-PF: Both approaches use simulations sampled from the prior and simulator. In (standard) NPE, a neural density estimator is trained to obtain the posterior. In NPE-PF, the posterior is evaluated by autoregressively passing the simulation dataset and observations to TabPFN.
The key idea:
TabPFN, originally trained for tabular regression and classification, can estimate posteriors by autoregressively modeling one parameter dimension after the other.
It's remarkably effective, even though TabPFN was not designed for SBI.
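The autoregressive factorization p(theta | x) = prod_i p(theta_i | x, theta_<i) can be sketched in a few lines. Here each one-dimensional conditional is a linear-Gaussian fit as a stand-in for the in-context TabPFN conditionals that NPE-PF actually uses (hypothetical helper names, not the released code):

```python
import numpy as np

def autoregressive_posterior(theta, x, x_o, n_samples=1000, seed=0):
    """Sample from an autoregressive approximation of p(theta | x_o).

    theta: (n, d) prior draws, x: (n, dx) matching simulations.
    Each conditional p(theta_i | x, theta_<i) is a linear-Gaussian fit,
    a toy stand-in for querying TabPFN in-context on the simulation set.
    """
    rng = np.random.default_rng(seed)
    n, d = theta.shape
    samples = np.zeros((n_samples, d))
    feats = x                                    # conditioning features, grow per dim
    cond = np.tile(np.asarray(x_o, float), (n_samples, 1))
    for i in range(d):
        A = np.c_[feats, np.ones(len(feats))]    # design matrix with bias column
        coef, *_ = np.linalg.lstsq(A, theta[:, i], rcond=None)
        sigma = (theta[:, i] - A @ coef).std() + 1e-8
        mean = np.c_[cond, np.ones(n_samples)] @ coef
        samples[:, i] = mean + sigma * rng.standard_normal(n_samples)
        feats = np.c_[feats, theta[:, i]]        # next dim also conditions on theta_i
        cond = np.c_[cond, samples[:, i]]
    return samples
```

The point of the sketch is the control flow: no network is trained; each conditional is read off the simulation table, one parameter dimension at a time.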
SBI usually relies on training neural nets on simulated data to approximate posteriors. But:
- Simulators can be expensive
- Training & tuning neural nets can be tedious
Our method NPE-PF repurposes TabPFN as an in-context density estimator for training-free, simulation-efficient Bayesian inference.
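For contrast, the most basic training-free alternative, plain rejection ABC, avoids neural nets entirely but pays for it with enormous simulation budgets. A toy sketch (illustrative prior and simulator, not NPE-PF):

```python
import numpy as np

def rejection_abc(x_o=1.0, n_sims=50_000, eps=0.2, seed=0):
    """Toy training-free baseline: keep prior draws whose simulation lands
    within eps of the observation.

    Illustrative choices: prior theta ~ N(0, 1), simulator x = theta + 0.1 * noise.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_sims)            # draw from the prior
    x = theta + 0.1 * rng.standard_normal(n_sims)  # simulate
    return theta[np.abs(x - x_o) < eps]            # accept near the observation
```

Only a small fraction of simulations survive the accept step, and the fraction shrinks rapidly with data dimension; NPE-PF instead reuses every simulation as in-context evidence.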
New preprint: SBI with foundation models!
Tired of training or tuning your inference network, or waiting for your simulations to finish? Our method NPE-PF can help: it provides training-free simulation-based inference, achieving competitive performance with orders of magnitude fewer simulations!
The neurons that encode sequential information into working memory do not fire in that same order during recall, a finding that is at odds with a long-standing theory. Read more in this month's Null and Noteworthy.
By @ldattaro.bsky.social
#neuroskyence
www.thetransmitter.org/null-and-not...
Many people in our lab use Scholar Inbox regularly -- highly recommended!
02.07.2025 07:05
This work was enabled and funded by an innovation project of @ml4science.bsky.social
11.06.2025 18:17
Congrats to @gmoss13.bsky.social, @coschroeder.bsky.social, @jakhmack.bsky.social, together with our great collaborators Vjeran Višnjević, Olaf Eisen, @oraschewski.bsky.social, Reinhard Drews. Thank you to @tuebingen-ai.bsky.social and @awi.de for making this work possible.
11.06.2025 11:47
If you're interested in learning more, check out the paper and code, or get in touch with @gmoss13.bsky.social
Code: github.com/mackelab/sbi...
Paper: www.cambridge.org/core/journal...
We obtain posterior distributions over ice accumulation and melting rates for Ekström Ice Shelf over the past several hundred years. This allows us to make quantitative statements about the history of atmospheric and oceanic conditions.
11.06.2025 11:47
Thanks to great data-collection efforts from @geophys-tuebingen.bsky.social and @awi.de, we can apply this approach to Ekström Ice Shelf, Antarctica.
11.06.2025 11:47
We develop a simulation-based inference workflow for inferring the accumulation and melting rates from measurements of the internal layers.
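To make the inference workflow concrete, here is a deliberately oversimplified toy: layer depths generated by net accumulation, inverted with rejection ABC. Every name, prior range, and the forward model itself are illustrative assumptions, not the paper's model:

```python
import numpy as np

def layer_depths(acc, melt, ages):
    # toy forward model: a layer deposited `age` years ago now sits at a
    # depth set by net accumulation since deposition (real models also
    # track ice flow and compaction)
    return (acc - melt) * ages

def infer_rates(obs, ages, n_sims=20_000, eps=1.0, seed=0):
    """Rejection ABC over (accumulation, melt) rates with uniform priors
    (ranges in m/yr are illustrative guesses)."""
    rng = np.random.default_rng(seed)
    acc = rng.uniform(0.0, 2.0, n_sims)
    melt = rng.uniform(0.0, 1.0, n_sims)
    sims = (acc - melt)[:, None] * ages[None, :]
    keep = np.linalg.norm(sims - obs, axis=1) < eps
    return acc[keep], melt[keep]
```

In this toy only the net rate acc - melt is identifiable; the real workflow exploits richer internal-layer geometry to separate the two contributions.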
11.06.2025 11:47
Radar measurements have long been used to measure the internal layer structure of Antarctic ice shelves. This structure contains information about the history of the ice shelf, including the past rate of snow accumulation at the surface and of ice melting at the base.
11.06.2025 11:47
Thrilled to share that our paper on using simulation-based inference to infer ice accumulation and melting rates for Antarctic ice shelves is now published in the Journal of Glaciology!
www.cambridge.org/core/journal...
A wide shot of approximately 30 individuals standing in a line, posing for a group photograph outdoors. The background shows a clear blue sky, trees, and a distant cityscape or hills.
Great news! Our March SBI hackathon in Tübingen was a huge success, with 40+ participants (30 onsite!). Expect significant updates soon: awesome new features & a revamped documentation you'll love! Huge thanks to our amazing SBI community! Release details coming soon.
12.05.2025 14:29
Ready to apply? Email mls-jobs@inf.uni-tuebingen.de with your CV, publication list, transcripts, research statement (2 pages max), two referee contacts, and code samples/repository link. Apply by May 31, 2025! We look forward to welcoming you to our team at @ml4science.bsky.social! 5/5
30.04.2025 13:43
We are open to candidates who are more interested in #neuroscience questions, as well as to those more interested in #machinelearning #AI aspects (e.g., training large-scale mechanistic neural networks, learning efficient emulators, automated model discovery for mechanistic models, ...) 4/5
30.04.2025 13:43
In a second project, funded by the DFG through the CRC Robust Vision, we want to use differentiable simulators of biophysical models to build data-driven models of visual processing in the retina. www.biorxiv.org/content/10.1... 3/5
30.04.2025 13:43
In a first project, funded by the ERC grant DeepCoMechTome, we want to use connectomics data to build large-scale simulations of the fly brain that can explain visually driven behavior (see, e.g., our prior work with Srinivas Turaga's group: www.nature.com/articles/d41586-024-02935-z) 2/5
30.04.2025 13:43
Hiring now! Join us at the exciting intersection of ML and Neuroscience! #AI4science
We're looking for PhDs, Postdocs, and Scientific Programmers who want to use deep learning to build, optimize, and study mechanistic models of neural computation. Full details: www.mackelab.org/jobs/ 1/5
Interested? More resources available here:
Code: github.com/mackelab/mar...
Paper: arxiv.org/pdf/2411.02728
In FNSE, we only have to solve a smaller, easier inverse problem, so the approach scales relatively easily to high-dimensional simulators.
We validate this on a high-dimensional Kolmogorov flow simulator with around one million data dimensions.
We apply this approach to various SBI methods (e.g. FNLE/FNRE), focusing on FNSE.
Compared to NPE with embedding nets, it's more simulation-efficient and accurate across time series of varying lengths.
We propose an SBI approach that can exploit Markovian simulators by locally identifying parameters consistent with individual state transitions.
We then compose these local results to obtain a posterior over parameters that align with the entire time series observation.
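For a Markovian simulator, the composition step can be sketched on a toy AR(1) model: per-transition log-likelihoods of theta are summed over the whole series, so only the single-transition inverse problem is ever solved. This grid-based toy is a stand-in for the actual score-based FNSE estimator, and it assumes the transition noise is known:

```python
import numpy as np

NOISE = 0.1  # assumed known transition noise of the toy simulator

def local_log_lik(theta_grid, x_t, x_next):
    # per-transition log-likelihood for a toy AR(1) simulator:
    # x_{t+1} = theta * x_t + N(0, NOISE^2)
    return -0.5 * ((x_next - theta_grid * x_t) / NOISE) ** 2

def compose_posterior(theta_grid, x):
    """Sum the local log-likelihoods over all transitions (flat prior on
    the grid), then normalize into a posterior over theta."""
    log_post = np.zeros_like(theta_grid)
    for t in range(len(x) - 1):
        log_post += local_log_lik(theta_grid, x[t], x[t + 1])
    log_post -= log_post.max()  # numerical stability before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

The same additive composition is what lets the method scale: each local term only ever sees one state transition, never the full time series.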
Excited to present our work on compositional SBI for time series at #ICLR2025 tomorrow!
If you're interested in simulation-based inference for time series, come chat with Manuel Gloeckler or Shoji Toyota
at Poster #420, Saturday 10:00-12:00 in Hall 3.
Paper: arxiv.org/abs/2411.02728
Thanks so much for the shout-out, and congrats on your exciting work!
Also, a good reminder to share that our work is now out in Cell Reports:
www.cell.com/cell-reports...