Aki Vehtari

@avehtari.bsky.social

Academy Professor in computational Bayesian modeling at Aalto University, Finland. Bayesian Data Analysis 3rd ed, Regression and Other Stories, and Active Statistics co-author. #mcmc_stan and #arviz developer. Web page https://users.aalto.fi/~ave/

6,373 Followers  |  255 Following  |  278 Posts  |  Joined: 06.02.2024

Latest posts by avehtari.bsky.social on Bluesky

The streets were already too well cleared and sprinkled with gravel yesterday after lunch, so I had to walk back home, but today the views are even prettier with sunshine

13.02.2026 10:46 — 👍 17    🔁 0    💬 0    📌 0
Photo of snowy scenery

Finally enough snow that I could ski from home to campus

12.02.2026 11:43 — 👍 39    🔁 0    💬 0    📌 0
When are Bayesian model probabilities overconfident? Bayesian model comparison is often based on the posterior distribution over the set of compared models. This distribution is often observed to concentrate on a single model even when other measures of...

See also arxiv.org/abs/2003.04026

12.02.2026 09:29 — 👍 2    🔁 0    💬 0    📌 0
In this same journal, Arnold Zellner published a seminal paper on Bayes' theorem as an optimal information processing rule. This result led to the variational formulation of Bayes' theorem, which is the central idea in generalized variational inference. Almost 40 years later, we revisit these ideas, but from the perspective of information deletion. We investigate rules which update a posterior distribution into an antedata distribution when a portion of data is removed. In such context, a rule which does not destroy or create information is called the optimal information deletion rule and we prove that it coincides with the traditional use of Bayes' theorem.

arXiv 📈🤖
Optimal information deletion and Bayes' theorem
By Montcho, Rue

11.02.2026 16:26 — 👍 0    🔁 1    💬 0    📌 0

I reported this, and they have now fixed it. They told me they had tested the survey form with several people before publishing it, but I guess no one else was trying to find the middle of the scale

11.02.2026 10:34 — 👍 4    🔁 0    💬 1    📌 0
Response scale with options 0 - Not at all likely, 1, 2, 4, 5, 6, 7, 8, 9, 10 Extremely likely

I felt confused, and it took me a few seconds to realize why

11.02.2026 09:41 — 👍 28    🔁 1    💬 5    📌 0
Lecturers in computer science (three positions) | Aalto University · Bioinformatics, biostatistics and health data science

We are looking for lecturers in computer science at Aalto University for full-time, permanent positions! 🌟
Areas:
✅ Bioinformatics, biostatistics and health data science
✅ Computer science: programming, algorithms and theory
✅ Software engineering
📍 Apply by 16.3.2026. More information and instructions 👇

10.02.2026 11:13 — 👍 0    🔁 1    💬 0    📌 0

Posterior-SBC now also with peer-review stamp in Statistics and Computing doi.org/10.1007/s112... (update your bib files)

09.02.2026 17:00 — 👍 24    🔁 4    💬 1    📌 1
Compositional data (proportions that sum to 1) behave in ways standard models aren't built for

I walk through why Dirichlet regression is often the right tool & what extra insight it gives, using a real eye-tracking example

#Dirichlet #r #brms #guide #eyetracking

open.substack.com/pub/mzlotean...

09.02.2026 16:05 — 👍 22    🔁 11    💬 2    📌 0
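As a minimal illustration of the constraint the post mentions (a hypothetical toy, not taken from the linked guide), Dirichlet draws produce compositions that are non-negative and sum to exactly one; the concentration vector `alpha` below is an invented stand-in for three gaze regions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concentration vector for three gaze regions
alpha = np.array([2.0, 5.0, 3.0])
props = rng.dirichlet(alpha, size=1000)

# Each row is a composition: non-negative entries summing to exactly 1,
# the constraint a standard (e.g. Gaussian) model ignores
row_sums = props.sum(axis=1)

# The Dirichlet mean is alpha / alpha.sum(), here (0.2, 0.5, 0.3)
mean_props = props.mean(axis=0)
```

Dirichlet regression then models `alpha` (or a mean/precision reparameterization of it) as a function of covariates, which is what brms does under the hood for this family.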
We are hiring lecturers who know Finnish: www.aalto.fi/fi/avoimet-t...

09.02.2026 09:25 — 👍 3    🔁 1    💬 0    📌 0

The building blocks exist, but for a ready-made example of how to do it, it's best to ask the authors of that paper

08.02.2026 21:01 — 👍 1    🔁 0    💬 0    📌 0
Efficient Uncertainty Propagation in Bayesian Two-Step Procedures Bayesian inference provides a principled framework for probabilistic reasoning. If inference is performed in two steps, uncertainty propagation plays a crucial role in accounting for all sources of un...

The second-part computation can be made faster using importance sampling arxiv.org/abs/2505.10510

08.02.2026 09:50 — 👍 6    🔁 1    💬 1    📌 0
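A generic self-normalized importance sampling sketch of the reweighting idea (the distributions here are toy stand-ins, not the linked paper's model): first-stage draws from a proposal `q` are reweighted toward an updated target `p` instead of rerunning MCMC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Proposal q = N(0, 1) stands in for first-stage posterior draws;
# target p = N(1, 1) stands in for the updated second-stage posterior
draws = rng.normal(0.0, 1.0, size=50_000)

# Importance ratio p(x)/q(x), computed in log space for stability:
# log N(x; 1, 1) - log N(x; 0, 1) = x - 1/2
log_w = -0.5 * (draws - 1.0) ** 2 + 0.5 * draws**2
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * draws)  # SNIS estimate of E_p[x], close to 1
ess = 1.0 / np.sum(w**2)       # effective sample size diagnostic
```

In practice one would also check the reliability of the weights, e.g. with the Pareto-khat diagnostic used by Pareto-smoothed importance sampling.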

This is known as the cut posterior. Stan developers have discussed adding support for cut in Stan, but it hasn't been prioritised, as it can often be achieved by running inference for the first part, and then running the second part in parallel with many datasets (as in multiple imputation)

08.02.2026 09:50 — 👍 3    🔁 1    💬 1    📌 0
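The two-stage recipe can be sketched on a toy conjugate model (hypothetical numbers, just to show the structure of a cut posterior; a real workflow would run Stan for each stage):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-module model:
#   module 1: theta ~ N(0, 1),        y1 | theta ~ N(theta, 1)
#   module 2: eta | theta ~ N(theta, 1), y2 | eta ~ N(eta, 1)
# The cut posterior uses p(theta | y1) only (no feedback from y2),
# then propagates each theta draw through module 2.
y1, y2 = 1.0, 3.0
n_draws = 10_000

# Stage 1: exact conjugate posterior p(theta | y1) = N(y1/2, 1/2)
theta = rng.normal(y1 / 2, np.sqrt(1 / 2), size=n_draws)

# Stage 2: conditional posterior p(eta | theta, y2) = N((theta + y2)/2, 1/2);
# these runs are independent across theta draws, so they parallelize
# exactly like multiple imputation
eta = rng.normal((theta + y2) / 2, np.sqrt(1 / 2), size=n_draws)
```

With Stan, stage 2 would be one short fit per stage-1 draw (or per batch of draws), with theta passed in as data.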
Species distribution modeling with expert elicitation and Bayesian calibration Species distribution models (SDM) are key tools in ecology, conservation, and natural resources management. They are traditionally trained with data on direct species observations. However, if collec....

Ever wondered how to use a supra-Bayesian approach in species distribution modeling (SDMs)? Well, we present a framework for it and exemplify it with real data on fish reproduction area estimation.

nsojournals.onlinelibrary.wiley.com/doi/10.1002/...
1/3

06.02.2026 14:44 — 👍 5    🔁 4    💬 1    📌 0

In CmdStanR, you just pass the pathfinder object to the sample init argument; no need to convert

06.02.2026 08:16 — 👍 4    🔁 0    💬 1    📌 0
GitHub - VisruthSK/stanflow: R Package for a Mildly Opinionated Stan Bayesian Workflow

First release of stanflow! v0.1.0 was a few days later than I'd like, but it's up now.

Stanflow is a metapackage à la tidyverse for a Stan Bayesian workflow; see the README for more details/features!

Feedback/issues/PRs are always appreciated.

#rstats #bayes

05.02.2026 16:18 — 👍 29    🔁 7    💬 0    📌 1
Divergent transitions in Hilbert Space Gaussian process posteriors and how to avoid them | Nikolas Siccha | Generable How to train your favorite basis function based approximation to Gaussian Processes.

Nikolas Siccha's blog post "Divergent transitions in Hilbert Space Gaussian process posteriors and how to avoid them" shows some cool results he started working on when at Aalto www.generable.com/post/hsgp-re...

05.02.2026 17:09 — 👍 11    🔁 2    💬 0    📌 0

With more than 14k lines of code!

05.02.2026 09:39 — 👍 2    🔁 0    💬 1    📌 0
Bayesian Workflow Bayesian statistics and statistical practice have evolved over the years, driven by advancements in theory, methods, and computational tools. This book explores the intricate workflows of applied Baye...

The publisher estimates the Bayesian Workflow book will ship in June www.routledge.com/Bayesian-Wor...

05.02.2026 09:12 — 👍 89    🔁 8    💬 2    📌 0

I'm watching this year's lectures to see how @rmcelreath.bsky.social, a co-author of the soon-to-be-published Bayesian Workflow book, teaches workflow. So far looking good!

04.02.2026 19:21 — 👍 8    🔁 0    💬 0    📌 0
StanCon 2026
International Conference on Bayesian Inference and Probabilistic Programming
17-21 August, 2026
Uppsala, Sweden

Three weeks left to submit a contributed talk abstract to StanCon 2026! You can also submit a poster abstract early if you need to make early travel plans. There will be travel and accommodation support for students, too!

More information about submitting at www.stancon2026.org/abstracts/

04.02.2026 15:55 — 👍 6    🔁 3    💬 0    📌 0

Watching both of @rmcelreath.bsky.social's A and B lectures feels like watching a movie that jumps between two different time periods

04.02.2026 19:15 — 👍 24    🔁 1    💬 1    📌 0

I also want to hear about your favorite recent (<5 years 😅) Stan and Stan-adjacent developments that changed your Bayesian framework (thinking about models, validating models, fitting models and checking their fit and how they break)!

04.02.2026 13:35 — 👍 0    🔁 1    💬 0    📌 0

as I'm revising my course materials, I keep stumbling upon cool @mc-stan.org developments.

Current favorites:
1. your model has funnels and you exhausted reparametrization ideas: metric = "dense_e" makes your HMC learn the covariance between parameters. Sloooow, but effective!
1/

04.02.2026 13:30 — 👍 32    🔁 15    💬 1    📌 2
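For reference, the dense metric can be requested directly in CmdStan's CLI (a config sketch; the model and file names are hypothetical):

```shell
# CmdStan CLI: adapt a dense mass matrix during warmup so HMC
# learns the covariance between parameters
./my_model sample algorithm=hmc metric=dense_e \
  data file=my_data.json output file=draws.csv
```

The same option is exposed in the interfaces, e.g. as the `metric = "dense_e"` argument to CmdStanR's `$sample()`.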
New pre-pre-print:

03.02.2026 19:29 — 👍 12    🔁 8    💬 2    📌 0
Statistical Rethinking 2026 Lecture B04 - Group-level confounding and intro to social networks
YouTube video by Richard McElreath

This time I try to explain group-level confounding and some ways to deal with it. Lecture B04 of Statistical Rethinking 2026 - fixed effects, Mundlak machines, latent Mundlak machines, intro to social network analysis and the social relations model. Full lecture list: github.com/rmcelreath/s...

30.01.2026 13:27 — 👍 102    🔁 16    💬 6    📌 4
Chapter 2 The Pizza Experiment | Advanced Cognitive Modeling Notes My notes for the advanced cognitive modeling course - 2026

wanna learn about cognitive modeling and love pizza? I am revising my cognitive modeling course notes and could use some feedback (or you might just enjoy learning about cog modeling in R and Stan): fusaroli.github.io/AdvancedCogn...

(only chapter 1-3 are in place, the others will come)

30.01.2026 09:08 — 👍 16    🔁 3    💬 0    📌 2
StanCon 2026 is this August 17-21 in Uppsala, Sweden 🇸🇪

www.stancon2026.org

⏰ Abstracts for contributed talks are due Feb 25

⏰ Abstracts for posters are due May 27

And just to be clear: Yes, StanCon is my favorite conference to attend!! Can't wait for this one!

29.01.2026 21:55 — 👍 9    🔁 3    💬 0    📌 1
Does the Use of Crowdsourced Listeners Yield Different Speech Intelligibility Results Than In-Person Listeners for Typically Developing Children? Purpose: We examined the performance of crowdsourced listeners compared with in-person listeners on the measurement of speech intelligibility for...

if you want to know how to compute marginal means from a mixed effects model in brms using rvars, then have I got a supplemental materials document for you pubs.asha.org/doi/10.1044/...

29.01.2026 22:10 — 👍 24    🔁 8    💬 1    📌 0

Another absolute banger from @natehaines.bsky.social: generative hierarchical Bayesian modeling* is the best way to investigate individual differences!

*This isn't "just" making use of Bayesian estimation, but one of the best examples I've seen of a full Bayesian embrace of uncertainty.

29.01.2026 08:07 — 👍 10    🔁 2    💬 0    📌 0