
Luigi Acerbi

@lacerbi.bsky.social

Assoc. Prof. of Machine & Human Intelligence | Univ. Helsinki & Finnish Centre for AI (FCAI) | Bayesian ML & probabilistic modeling | https://lacerbi.github.io/

2,239 Followers  |  206 Following  |  208 Posts  |  Joined: 17.11.2023

Posts by Luigi Acerbi (@lacerbi.bsky.social)

Yes, this still happens! Before doing a manual check, I deploy agents to do the first check for me (literally a /doublecheck skill in Claude Code that I force it to invoke automatically at the end of each task). This often catches issues, including once a totally fabricated MCMC analysis...

11.02.2026 08:12 — 👍 1    🔁 0    💬 1    📌 0

Look who's there!

09.02.2026 11:49 — 👍 1    🔁 0    💬 0    📌 0

With (in pseudo-random order) @mummitrollet.bsky.social @nasrullohloka.bsky.social @huangdaolang.bsky.social @conorhassan.bsky.social @sfrancesco.bsky.social @arnosolin.bsky.social @samikaski.bsky.social and several others not on here -- check their names above!

06.02.2026 11:52 — 👍 4    🔁 0    💬 0    📌 0

More info soon; all papers fit our AI4science research agenda within the Finnish Center for Artificial Intelligence & @ellisinstitute.fi -- building efficient methods for inference, uncertainty quantification and decision making, leveraging powerful autoregressive transformers and diffusion models.

06.02.2026 11:52 — 👍 4    🔁 1    💬 1    📌 0

A bit of a delayed celebration, but happy that our three submitted papers were accepted at @iclr-conf.bsky.social 2026! This was a... complicated year for ICLR, but hopefully now we can focus on the science.

06.02.2026 11:52 — 👍 15    🔁 4    💬 1    📌 0

This game was 100% designed, made, and tested by Claude Code with one prompt to "make a complete Sierra-style adventure game with EGA-like graphics and text parser, with 10-15 minutes of gameplay." I gave two prompts to play-test the game & deploy it.

Play: enchanted-lighthouse-game.netlify.app

27.01.2026 03:37 — 👍 91    🔁 9    💬 5    📌 0
GitHub - acerbilab/svbmc: Stacking Variational Bayesian Monte Carlo (S-VBMC) algorithm for combining Variational Bayesian Monte Carlo (VBMC) posteriors to boost inference performance.

7/ Work by @sfrancesco.bsky.social, @chengkunli.bsky.social & myself, with many thanks to the Research Council of Finland.

The S-VBMC code is available as an easy-to-use Python library: github.com/acerbilab/sv...

Check out the paper: openreview.net/forum?id=M2i...

14.01.2026 14:31 — 👍 1    🔁 0    💬 0    📌 1

6/ S-VBMC is an inexpensive post-processing step, so it greatly improves posterior quality at a negligible computational cost!

14.01.2026 14:31 — 👍 0    🔁 0    💬 1    📌 0

5/ This optimization is made possible by VBMC's handy property of providing a closed-form solution for individual components of the ELBO (I_{m,k}), allowing the following formulation for M independent VBMC solutions:

14.01.2026 14:31 — 👍 0    🔁 0    💬 1    📌 0
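Schematically, the formulation from the thread might be written as follows. This is my own reconstruction in my own notation, not copied from the paper: run m contributes fixed Gaussian components q_{m,k}, and only the mixture weights w_{m,k} are optimized.

```latex
% Stacked posterior over M runs, components held fixed:
q_{\text{stack}}(\theta) \;=\; \sum_{m=1}^{M} \sum_{k=1}^{K_m} w_{m,k}\, q_{m,k}(\theta),
\qquad w_{m,k} \ge 0, \quad \sum_{m,k} w_{m,k} = 1.

% Stacked ELBO: the expected log-joint decomposes over components via the
% closed-form terms I_{m,k} = \mathbb{E}_{q_{m,k}}[\log p(\theta, \mathcal{D})],
% so only the entropy of the stacked mixture needs re-estimating:
\text{ELBO}(w) \;=\; \sum_{m=1}^{M} \sum_{k=1}^{K_m} w_{m,k}\, I_{m,k}
\;+\; \mathcal{H}\big[q_{\text{stack}}\big].
```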

4/ S-VBMC “stacks” the Gaussian mixture posteriors output by independent VBMC runs by maximizing the “stacked” ELBO with respect to the weights of the individual Gaussian components. It doesn’t change the components, it just re-weights them!

14.01.2026 14:31 — 👍 0    🔁 0    💬 1    📌 0
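The re-weighting idea can be sketched in a toy 1-D setting. Everything below is made up for illustration (the component parameters, the I values, and the crude random-search optimizer are all stand-ins); it is NOT the S-VBMC implementation, which uses the closed-form I_{m,k} from actual VBMC runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Components pooled from (imagined) independent VBMC runs stay fixed;
# only their mixture weights are optimized.
mus    = np.array([-2.0, 0.0, 2.0, 2.1])   # fixed component means
sigmas = np.array([ 0.6, 0.5, 0.5, 0.7])   # fixed component stds
I      = np.array([ 0.1, 1.0, 0.9, 0.95])  # stand-ins for the I_{m,k} terms

def entropy_mc(w, n=20_000):
    """Monte Carlo estimate of the entropy of the stacked Gaussian mixture."""
    comp = rng.choice(len(w), size=n, p=w)
    x = rng.normal(mus[comp], sigmas[comp])
    # mixture density evaluated at the sampled points
    dens = np.sum(w * np.exp(-0.5 * ((x[:, None] - mus) / sigmas) ** 2)
                  / (sigmas * np.sqrt(2 * np.pi)), axis=1)
    return -np.mean(np.log(dens))

def stacked_elbo(logits):
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ I + entropy_mc(w)  # sum_k w_k I_k + H[q_stack]

# Crude random search over softmax logits, standing in for a real optimizer.
logits = np.zeros(4)
best = stacked_elbo(logits)
for _ in range(200):
    cand = logits + rng.normal(scale=0.3, size=4)
    val = stacked_elbo(cand)
    if val > best:
        logits, best = cand, val

w_opt = np.exp(logits - logits.max())
w_opt /= w_opt.sum()
```

Because the components never move, each objective evaluation is cheap, which is why stacking works as an inexpensive post-processing step.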
VBMC vs. S-VBMC

3/ However, VBMC’s relatively conservative active learning strategy can lead it to miss some portions of the true posterior when the posterior has challenging features (multiple modes, long tails). S-VBMC fixes this!

14.01.2026 14:31 — 👍 0    🔁 0    💬 1    📌 0
GitHub - acerbilab/pyvbmc: PyVBMC: Variational Bayesian Monte Carlo algorithm for posterior and model inference in Python

2/ Bayesian inference of model parameters can be a complex problem to solve, especially with expensive likelihood functions. We addressed this in the past with Variational Bayesian Monte Carlo (VBMC repo: github.com/acerbilab/py...).

14.01.2026 14:31 — 👍 1    🔁 0    💬 1    📌 0
Stacking Variational Bayesian Monte Carlo paper in TMLR.

1/ Excited to share our new work published in Transactions on Machine Learning Research (TMLR), Stacking Variational Bayesian Monte Carlo (S-VBMC)!

14.01.2026 14:31 — 👍 14    🔁 2    💬 1    📌 1

I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity

28.12.2025 06:23 — 👍 115    🔁 28    💬 4    📌 6
Diffusion Models in Simulation-Based Inference: A Tutorial Review
Diffusion models have recently emerged as powerful learners for simulation-based inference (SBI), enabling fast and accurate estimation of latent parameters from simulated and real data. Their score-b...

What an amazing Yule gift from @stefanradev.bsky.social & colleagues: a tour-de-force tutorial on diffusion models for simulator-based inference.

This is one of the most comprehensive and useful review/tutorials I have ever seen -- a must read! Kudos to all the authors!

arxiv.org/abs/2512.20685

26.12.2025 11:16 — 👍 35    🔁 3    💬 0    📌 0

EurIPS was pretty fun! Thanks to @lacerbi.bsky.social , @conorhassan.bsky.social , and all organizers of the Amortized ProbML team for the great workshop and for inviting me as a speaker! 🙌 Also great to be able to present our spotlight paper on sparsity on the main stage 😍

06.12.2025 23:40 — 👍 18    🔁 5    💬 1    📌 0

Tobias Niehues @tobnie.bsky.social is presenting our work with @dominikstrb.bsky.social 'Amortized Bayesian decision-making for inferring decision-making parameters from behavior' at the Amortized ProbML Workshop and the @ellis.eu UnConference. Please come by our poster!

04.12.2025 11:06 — 👍 18    🔁 3    💬 0    📌 0
ALINE: Joint Amortization for Bayesian Inference and Active Data Acquisition

4/ If you are at NeurIPS, go talk to @huangdaolang.bsky.social at the poster session today!

Joint work with Daolang, @wenxinyi.bsky.social , @ayushbharti.bsky.social & @samikaski.bsky.social, FCAI & @ellisinstitute.fi

Website: huangdaolang.com/aline/

03.12.2025 17:21 — 👍 3    🔁 0    💬 0    📌 0

3/ Information gain would normally be hard to estimate, but the synergy between the inference and policy heads makes training work like a charm, surprisingly well.

Since we like to make everything flexible, you can also pick & choose your information gain targets (data or subsets of parameters) at runtime.

03.12.2025 17:21 — 👍 0    🔁 0    💬 1    📌 0

2/ Why *both* inference and design? Not just because we *can*, but because we *must* (h/t @paulbuerkner.com for this catchphrase).

We warm-start the network first by training the inference head via MLE and use that to compute the information gain required to train the policy head.

03.12.2025 17:21 — 👍 0    🔁 0    💬 1    📌 0
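ALINE's actual objective is more involved than the thread spells out, but the core quantity, expected information gain (EIG), can be checked by hand in a toy linear-Gaussian model where it is analytic. The model below is purely illustrative (my choice, not the paper's): theta ~ N(0,1), observation y = d*theta + noise, design d.

```python
import math
import random

random.seed(0)

def eig_mc(d, n=50_000):
    """Monte Carlo estimate of the expected information gain of design d,
    E[log p(theta | y, d) - log p(theta)], in the toy conjugate model
    theta ~ N(0,1), y = d*theta + N(0,1)."""
    post_var = 1.0 / (1.0 + d * d)  # conjugate Gaussian posterior variance
    total = 0.0
    for _ in range(n):
        theta = random.gauss(0.0, 1.0)
        y = d * theta + random.gauss(0.0, 1.0)
        post_mean = d * y * post_var
        log_post = (-0.5 * math.log(2 * math.pi * post_var)
                    - (theta - post_mean) ** 2 / (2 * post_var))
        log_prior = -0.5 * math.log(2 * math.pi) - theta ** 2 / 2
        total += log_post - log_prior
    return total / n

# Analytic EIG for this model is 0.5*log(1 + d^2): larger |d| is more
# informative, which is what a design policy should learn to exploit.
est, exact = eig_mc(2.0), 0.5 * math.log(5.0)
```

Note that the Monte Carlo estimate needs posterior densities; this is the kind of quantity an amortized inference head can supply, which is (as I read the thread) why training the inference head first makes the policy head trainable.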

1/ Very happy with this spotlight paper at @neuripsconf.bsky.social where we continue our "Amortize Everything" agenda.

After the Amortized Conditioning Engine (Chang et al., AISTATS, 2025) which amortizes all sorts of inference tasks, here with ALINE we amortize both inference & design.

03.12.2025 17:21 — 👍 9    🔁 2    💬 1    📌 0
FNOPE: Simulation-based inference on function spaces with Fourier...
Simulation-based inference (SBI) is an established approach for performing Bayesian inference on scientific simulators. SBI so far works best on low-dimensional parametric models. However, it is...

I’m super excited to present our new work at #EurIPS2025 and #NeurIPS2025! We developed FNOPE: a new simulation-based inference (SBI) method which excels at inferring function-valued parameters!

Paper: openreview.net/forum?id=yB5...
Code: github.com/mackelab/fnope
(1/9)

01.12.2025 08:34 — 👍 20    🔁 5    💬 1    📌 2

Are there non-predatory for-profit publishers? This "thing" clearly never went through peer review despite what they say.

Well, maybe it did go through "peer review": LLM reviewers reviewed an LLM-written paper, fair enough.

28.11.2025 20:29 — 👍 5    🔁 0    💬 0    📌 0

Updated website: sites.google.com/view/amortiz...

Co-organized with Cen-You (Scott) Li, @conorhassan.bsky.social, @desirivanova.bsky.social as well as @samikaski.bsky.social & FCAI

27.11.2025 12:03 — 👍 5    🔁 2    💬 0    📌 0
Amortized ProbML workshop page snapshot

If you are at the ELLIS UnConference on Dec 2 (pre @euripsconf.bsky.social), join us at our Amortized ProbML workshop!

Come for our great speakers: @paulbuerkner.com Yingzhen Li @davidruegamer.bsky.social @liza-semenova.bsky.social & poster session; stay for our hot takes on amortized probML!

27.11.2025 12:03 — 👍 18    🔁 6    💬 1    📌 0

I was thinking of sharing them publicly on my blog but then realized it could be problematic / misunderstood / misused, so for now I refrained from it... but yes I can send them privately. I'll email you later.

12.11.2025 09:40 — 👍 1    🔁 0    💬 0    📌 0

We regularly use LLMs to review our own papers before submission. I agree that in principle it's hard to "prove" that a review is entirely LLM-generated vs. LLM-written... but in practice it's quite easy especially when you see multiple "LLM-smelling" points in a review.

12.11.2025 09:26 — 👍 1    🔁 0    💬 1    📌 0

AISTATS adopted the iron fist against "LLM-generated" reviews and I fully agree with it.

Hopefully we can figure out a way to allow responsible, "LLM-assisted" reviews where one genuinely uses the LLM to assist (not replace) the reviewer, which can be a force multiplier.

12.11.2025 09:16 — 👍 2    🔁 0    💬 1    📌 0

Even just the fact that we have oral tests (throughout school and then university) shocks a lot of people.

It's easy to fall into the fallacy "I went through this system, so it is obviously the best system", and surely it has downsides, but tbh it does prepare you for a lot of things in life...

03.11.2025 10:34 — 👍 1    🔁 0    💬 1    📌 0
Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?
Object binding, the brain's ability to bind the many features that collectively represent an object into a coherent whole, is central to human cognition. It groups low-level perceptual features into h...

Very interesting work on emerging object-binding representations in vision transformers by @kordinglab.bsky.social arxiv.org/abs/2510.24709

This might seem an oddly specific property but the good ol' binding problem reflects a fundamental primitive of cognition, epistemology, you name it.

03.11.2025 10:21 — 👍 7    🔁 0    💬 0    📌 0