
Machine Learning in Science

@mackelab.bsky.social

We build probabilistic #MachineLearning and #AI tools for scientific discovery, especially in Neuroscience. Probably not posted by @jakhmack.bsky.social. 📍 @ml4science.bsky.social, Tübingen, Germany

2,263 Followers  |  192 Following  |  50 Posts  |  Joined: 14.11.2024

Latest posts by mackelab.bsky.social on Bluesky

Joint work of @vetterj.bsky.social, Manuel Gloeckler, @danielged.bsky.social, and @jakhmack.bsky.social

@ml4science.bsky.social, @tuebingen-ai.bsky.social, @unituebingen.bsky.social

23.07.2025 14:27 — 👍 3    🔁 0    💬 0    📌 0
Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models
Simulation-based inference (SBI) offers a flexible and general approach to performing Bayesian inference: In SBI, a neural network is trained on synthetic data simulated from a model and used to rapid...

📄 Check out the full paper for methods, experiments and more cool stuff: arxiv.org/abs/2504.17660
💻 Code is available: github.com/mackelab/npe...

23.07.2025 14:27 — 👍 2    🔁 0    💬 1    📌 0

💡 Takeaway:
By leveraging foundation models like TabPFN, we can make SBI training-free, simulation-efficient, and easy to use.
This work is another step toward user-friendly Bayesian inference for a broader science and engineering community.

23.07.2025 14:27 — 👍 0    🔁 0    💬 1    📌 0
Results on the pyloric simulator.
(a) Voltage traces from the experimental measurement (top) and a posterior predictive sample simulated using the TSNPE-PF posterior mean as the parameter (bottom).
(b) Average distance (energy scoring rule) to the observation and percentage of valid simulations from posterior samples, compared to experimental results obtained in Glaser et al.


But does it scale to complex real-world problems? We tested it on two challenging Hodgkin-Huxley-type models:
🧠 single-compartment neuron
🦀 31-parameter crab pyloric network

NPE-PF delivers tight posteriors & accurate predictions with far fewer simulations than previous methods.

23.07.2025 14:27 — 👍 1    🔁 0    💬 1    📌 0
SBI benchmark results for amortized and sequential NPE-PF. 
C2ST for NPE, NLE, and NPE-PF across ten reference posteriors (lower is better); dots indicate averages and bars show 95% confidence intervals over five independent runs.
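The C2ST metric in the caption above can be illustrated with a minimal toy sketch (my own illustration, not the paper's evaluation code): train a classifier to distinguish samples from two distributions; held-out accuracy near 0.5 means the sets are indistinguishable (good), accuracy near 1.0 means they clearly differ (bad). Here a leave-one-out 1-nearest-neighbour classifier stands in for the usual neural classifier.

```python
# Toy classifier two-sample test (C2ST) with a leave-one-out 1-NN classifier.
import numpy as np

def c2st_knn(p_samples, q_samples):
    """Leave-one-out 1-nearest-neighbour C2ST: returns classification accuracy."""
    X = np.vstack([p_samples, q_samples])
    y = np.r_[np.zeros(len(p_samples)), np.ones(len(q_samples))]
    # pairwise distances, with each point's distance to itself masked out
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    pred = y[d.argmin(axis=1)]  # predict the label of the nearest neighbour
    return (pred == y).mean()

rng = np.random.default_rng(0)
# identical distributions -> accuracy near 0.5
same = c2st_knn(rng.normal(0, 1, (500, 2)), rng.normal(0, 1, (500, 2)))
# clearly shifted distributions -> accuracy near 1.0
diff = c2st_knn(rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2)))
```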


What you get with NPE-PF:
🚫 No need to train inference nets or tune hyperparameters.
🌟 Competitive or superior performance vs. standard SBI methods.
🚀 Especially strong performance for smaller simulation budgets.
🔄 Filtering to handle large datasets + support for sequential inference.
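The filtering point could look roughly like this (a hypothetical sketch, not the paper's exact scheme; the Euclidean distance metric and the cutoff `k` are my assumptions): when there are more simulations than fit into the model's context, keep only the simulations whose outputs lie closest to the observation.

```python
# Hypothetical filtering of a large simulation dataset before in-context inference.
import numpy as np

def filter_simulations(theta, x, x_obs, k=100):
    """Return the k (theta, x) pairs whose simulated output x is closest to x_obs."""
    dist = np.linalg.norm(x - x_obs, axis=1)
    idx = np.argsort(dist)[:k]
    return theta[idx], x[idx]

rng = np.random.default_rng(0)
# toy dataset: 10,000 simulations of x = theta + noise
theta = rng.normal(size=(10_000, 3))
x = theta + 0.1 * rng.standard_normal((10_000, 3))
x_obs = np.array([1.0, -1.0, 0.5])
theta_f, x_f = filter_simulations(theta, x, x_obs, k=100)
```

The retained subset is small enough to pass as context while concentrating on the region of parameter space relevant to the observation.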

23.07.2025 14:27 — 👍 0    🔁 0    💬 1    📌 0
Illustration of NPE and NPE-PF: Both approaches use simulations sampled from the prior and simulator. In (standard) NPE, a neural density estimator is trained to obtain the posterior. In NPE-PF, the posterior is evaluated by autoregressively passing the simulation dataset and observations to TabPFN.


The key idea:
TabPFN, originally trained for tabular regression and classification, can estimate posteriors by autoregressively modeling one parameter dimension after the other.
It's remarkably effective, even though TabPFN was not designed for SBI.
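The autoregressive trick can be sketched in a few lines (a toy illustration under my own assumptions, not the NPE-PF code): the posterior is factorized as p(θ | x) = ∏ᵢ p(θᵢ | θ₍<ᵢ₎, x), and each one-dimensional conditional is fit by an off-the-shelf tabular predictor. Here a simple linear-Gaussian regressor stands in for TabPFN.

```python
# Autoregressive posterior sampling, one parameter dimension at a time,
# with a linear-Gaussian conditional estimator standing in for TabPFN.
import numpy as np

rng = np.random.default_rng(0)

n_sim, dim = 2000, 2
theta = rng.normal(0.0, 1.0, size=(n_sim, dim))   # prior samples
x = theta + 0.1 * rng.standard_normal((n_sim, dim))  # toy simulator

def fit_linear_gaussian(features, target):
    """Fit target ~ Normal(W @ features + b, sigma^2) by least squares."""
    A = np.hstack([features, np.ones((len(features), 1))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    sigma = (target - A @ coef).std()
    return coef, sigma

def sample_posterior(x_obs, n_samples=1000):
    """Sample theta ~ p(theta | x_obs), conditioning dim i on x and theta_{<i}."""
    samples = np.zeros((n_samples, dim))
    for i in range(dim):
        feats = np.hstack([x, theta[:, :i]])
        coef, sigma = fit_linear_gaussian(feats, theta[:, i])
        query = np.hstack([np.tile(x_obs, (n_samples, 1)), samples[:, :i]])
        mean = np.hstack([query, np.ones((n_samples, 1))]) @ coef
        samples[:, i] = mean + sigma * rng.standard_normal(n_samples)
    return samples

x_obs = np.array([0.5, -0.3])
post = sample_posterior(x_obs)
```

With small simulator noise, the sampled posterior concentrates near the observation, as expected for this conjugate toy model.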

23.07.2025 14:27 — 👍 1    🔁 0    💬 1    📌 0

SBI usually relies on training neural nets on simulated data to approximate posteriors. But:
โš ๏ธ Simulators can be expensive
โš ๏ธ Training & tuning neural nets can be tedious
Our method NPE-PF repurposes TabPFN as an in-context density estimator for training-free, simulation-efficient Bayesian inference.

23.07.2025 14:27 — 👍 1    🔁 0    💬 1    📌 0

New preprint: SBI with foundation models!
Tired of training or tuning your inference network, or waiting for your simulations to finish? Our method NPE-PF can help: It provides training-free simulation-based inference, achieving competitive performance with orders of magnitude fewer simulations! ⚡️

23.07.2025 14:27 — 👍 22    🔁 9    💬 1    📌 2
Null and Noteworthy: Neurons tracking sequences don't fire in order
Instead, neurons encode the position of sequential items in working memory based on when they fire during ongoing brain wave oscillations, a finding that challenges a long-standing theory.

The neurons that encode sequential information into working memory do not fire in that same order during recall, a finding that is at odds with a long-standing theory. Read more in this month's Null and Noteworthy.

By @ldattaro.bsky.social

#neuroskyence

www.thetransmitter.org/null-and-not...

30.06.2025 16:08 — 👍 42    🔁 19    💬 1    📌 0

Many people in our lab use Scholar Inbox regularly -- highly recommended!

02.07.2025 07:05 — 👍 7    🔁 0    💬 0    📌 0

This work was enabled and funded by an innovation project of @ml4science.bsky.social

11.06.2025 18:17 — 👍 1    🔁 1    💬 0    📌 0

Congrats to @gmoss13.bsky.social, @coschroeder.bsky.social, and @jakhmack.bsky.social, together with our great collaborators Vjeran Višnjević, Olaf Eisen, @oraschewski.bsky.social, and Reinhard Drews. Thank you to @tuebingen-ai.bsky.social and @awi.de for making this work possible.

11.06.2025 11:47 — 👍 2    🔁 1    💬 1    📌 0

If you're interested in learning more, check out the paper and code, or get in touch with @gmoss13.bsky.social

Code: github.com/mackelab/sbi...
Paper: www.cambridge.org/core/journal...

11.06.2025 11:47 — 👍 1    🔁 1    💬 1    📌 0

We obtain posterior distributions over ice accumulation and melting rates for Ekström Ice Shelf over the past several hundred years. This allows us to make quantitative statements about the history of atmospheric and oceanic conditions.

11.06.2025 11:47 — 👍 2    🔁 1    💬 1    📌 0

Thanks to great data collection efforts from @geophys-tuebingen.bsky.social and @awi.de, we can apply this approach to Ekström Ice Shelf, Antarctica.

11.06.2025 11:47 — 👍 0    🔁 0    💬 1    📌 0

We develop a simulation-based inference workflow for inferring the accumulation and melting rates from measurements of the internal layers.

11.06.2025 11:47 — 👍 0    🔁 0    💬 1    📌 0

Radar measurements have long been used for measuring the internal layer structure of Antarctic ice shelves. This structure contains information about the history of the ice shelf. This includes the past rate of snow accumulation at the surface, as well as ice melting at the base.

11.06.2025 11:47 — 👍 0    🔁 0    💬 1    📌 0
Simulation-based inference of surface accumulation and basal melt rates of an Antarctic ice shelf from isochronal layers | Journal of Glaciology, Volume 71 | Cambridge Core

Thrilled to share that our paper on using simulation-based inference to infer ice accumulation and melting rates for Antarctic ice shelves is now published in the Journal of Glaciology!

www.cambridge.org/core/journal...

11.06.2025 11:47 — 👍 16    🔁 0    💬 1    📌 3
A wide shot of approximately 30 individuals standing in a line, posing for a group photograph outdoors. The background shows a clear blue sky, trees, and a distant cityscape or hills.


Great news! Our March SBI hackathon in Tübingen was a huge success, with 40+ participants (30 onsite!). Expect significant updates soon: awesome new features & a revamped documentation you'll love! Huge thanks to our amazing SBI community! Release details coming soon. 🥠🎉

12.05.2025 14:29 — 👍 25    🔁 7    💬 0    📌 1

Ready to apply? Email mls-jobs@inf.uni-tuebingen.de with your CV, publication list, transcripts, research statement (2 pages max), two referee contacts, and code samples/repository link. Apply by May 31, 2025! We look forward to welcoming you to our team at @ml4science.bsky.social ! 5/5

30.04.2025 13:43 — 👍 0    🔁 0    💬 0    📌 0

We are open to candidates who are more interested in #neuroscience questions, as well as to ones more interested in #machinelearning #AI aspects (e.g. training large-scale mechanistic neural networks, learning efficient emulators, automated model discovery for mechanistic models, …) 4/5

30.04.2025 13:43 — 👍 1    🔁 0    💬 1    📌 0
Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics
Biophysical neuron models provide insights into cellular mechanisms underlying neural computations. However, a central challenge has been the question of how to identify the parameters of detailed bio...

👀 In a second project, funded by the DFG through the CRC Robust Vision, we want to use differentiable simulators of biophysical models to build data-driven models of visual processing in the retina. www.biorxiv.org/content/10.1... 3/5

30.04.2025 13:43 — 👍 0    🔁 0    💬 1    📌 0
Fly-brain connectome helps to make predictions about neural activity
A simulation that uses machine learning predicts neural-circuit function in the fly brain from the connectivity between neurons.

🪰 In a first project, funded by the ERC Grant DeepCoMechTome, we want to make use of connectomics data to build large-scale simulations of the fly brain that can explain visually driven behavior; see, e.g., our prior work with Srinivas Turaga's group: www.nature.com/articles/d41586-024-02935-z 2/5

30.04.2025 13:43 — 👍 1    🔁 0    💬 1    📌 0
Jobs - mackelab
The MackeLab is a research group at the Excellence Cluster Machine Learning at Tübingen University!

🎓 Hiring now! 🧠 Join us at the exciting intersection of ML and Neuroscience! #AI4science
We're looking for PhDs, Postdocs, and Scientific Programmers who want to use deep learning to build, optimize, and study mechanistic models of neural computations. Full details: www.mackelab.org/jobs/ 1/5

30.04.2025 13:43 — 👍 23    🔁 13    💬 1    📌 0
GitHub - mackelab/markovsbi: Public repository for the paper "Compositional simulation-based inference for time series."

Interested? More resources available here:

Code: github.com/mackelab/mar...
Paper: arxiv.org/pdf/2411.02728

25.04.2025 08:53 — 👍 0    🔁 0    💬 0    📌 0

In FNSE, we only have to solve a smaller and easier inverse problem, which scales relatively easily to high-dimensional simulators.

We validate this on a high-dimensional Kolmogorov flow simulator with around one million data dimensions.

25.04.2025 08:53 — 👍 1    🔁 0    💬 1    📌 0

We apply this approach to various SBI methods (e.g. FNLE/FNRE), focusing on FNSE.

Compared to NPE with embedding nets, it's more simulation-efficient and accurate across time series of varying lengths.

25.04.2025 08:53 — 👍 0    🔁 0    💬 1    📌 0

We propose an SBI approach that can exploit Markovian simulators by locally identifying parameters consistent with individual state transitions.

We then compose these local results to obtain a posterior over parameters that align with the entire time series observation.
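The composition step can be made concrete with a toy conjugate-Gaussian version (my own illustration, not the paper's neural implementation; the simulator, noise level, and conjugate updates are assumptions): for a Markovian simulator, p(θ | x₀:T) ∝ p(θ) ∏ₜ [ p(θ | xₜ, xₜ₊₁) / p(θ) ], so local per-transition posteriors can be combined in natural parameters after dividing out the extra copies of the prior.

```python
# Composing per-transition Gaussian posteriors for a Markovian toy simulator.
import numpy as np

rng = np.random.default_rng(0)

# toy Markovian simulator: x_{t+1} = x_t + theta + noise
sigma, T = 0.5, 20
theta_true = 1.3
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = x[t] + theta_true + sigma * rng.standard_normal()

prior_prec, prior_mean = 1.0, 0.0  # prior: theta ~ N(0, 1)

def local_posterior(x_t, x_next):
    """Posterior from a single transition (conjugate Gaussian update)."""
    prec = prior_prec + 1.0 / sigma**2
    mean = ((x_next - x_t) / sigma**2 + prior_prec * prior_mean) / prec
    return mean, prec

# combine local posteriors in natural parameters, then remove the
# (T - 1) extra copies of the prior that were counted once per transition
prec_sum, eta_sum = 0.0, 0.0
for t in range(T):
    m, p = local_posterior(x[t], x[t + 1])
    prec_sum += p
    eta_sum += p * m
post_prec = prec_sum - (T - 1) * prior_prec
post_mean = (eta_sum - (T - 1) * prior_prec * prior_mean) / post_prec

# reference: full posterior computed from all transitions at once
ref_prec = prior_prec + T / sigma**2
ref_mean = (np.sum(np.diff(x)) / sigma**2) / ref_prec
```

In this conjugate case the composed posterior matches the full posterior exactly; the paper's contribution is doing the analogous composition with learned local estimators.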

25.04.2025 08:53 — 👍 0    🔁 0    💬 1    📌 0
Compositional simulation-based inference for time series
Amortized simulation-based inference (SBI) methods train neural networks on simulated data to perform Bayesian inference. While this strategy avoids the need for tractable likelihoods, it often requir...

Excited to present our work on compositional SBI for time series at #ICLR2025 tomorrow!

If you're interested in simulation-based inference for time series, come chat with Manuel Gloeckler or Shoji Toyota

at Poster #420, Saturday 10:00–12:00 in Hall 3.

📰: arxiv.org/abs/2411.02728

25.04.2025 08:53 — 👍 25    🔁 4    💬 2    📌 1

Thanks so much for the shout-out, and congrats on your exciting work!! 🎉 🙂

Also, a good reminder to share that our work is now out in Cell Reports 🙏🎊

⬇️

www.cell.com/cell-reports...

17.04.2025 20:50 — 👍 36    🔁 10    💬 1    📌 0
