
Lasse Elsemüller

@elseml.bsky.social

💡 PhD candidate @ Heidelberg University. 🌱 AI for science - simulation-based inference, robust machine learning & cognitive modeling.

1,391 Followers  |  469 Following  |  12 Posts  |  Joined: 16.11.2024

Latest posts by elseml.bsky.social on Bluesky

I'm putting together a visualization workshop for PhD students 🧪📊

Looking for examples of the good, the bad, and the ugly.

Do you have examples for a great (or awful) figure? Plots and overview/explainer figures are welcome.

Thanks 🧑

03.06.2025 05:39 · 👍 37    🔁 10    💬 11    📌 0
Introduction – Amortized Bayesian Cognitive Modeling

🧠 Check out the classic examples from Bayesian Cognitive Modeling: A Practical Course (Lee & Wagenmakers, 2013), translated into step-by-step tutorials with BayesFlow!

Interactive version: kucharssim.github.io/bayesflow-co...

PDF: osf.io/preprints/ps...

30.05.2025 14:28 · 👍 29    🔁 14    💬 0    📌 0

New preprint!

Individual differences in neurophysiological correlates of post-response adaptation: A model-based approach

osf.io/preprints/ps...

This work seeks to extract the effects of response monitoring on decision-making by combining model-based CogNeuro with methods for studying individual differences.

06.03.2025 14:21 · 👍 6    🔁 5    💬 1    📌 0

Congrats, really impressive work (especially providing the many additional resources on the website)!

06.03.2025 14:42 · 👍 0    🔁 0    💬 0    📌 0
Abstract
Introduction
A key step in the Bayesian workflow for model building is the graphical assessment of model predictions, whether these are drawn from the prior or posterior predictive distribution. The goal of these assessments is to identify whether the model is a reasonable (and ideally accurate) representation of the domain knowledge and/or observed data. Many commonly used visual predictive checks can be misleading if their implicit assumptions do not match reality. Thus, there is a need for more guidance on selecting, interpreting, and diagnosing appropriate visualizations. As a visual predictive check can itself be viewed as a model fit to data, assessing when this model fails to represent the data is important for drawing well-informed conclusions.

Demonstration
We present recommendations for appropriate visual predictive checks for observations that are continuous, discrete, or a mixture of the two. We also discuss diagnostics to aid in the selection of visual methods. Specifically, for detecting an incorrect assumption of continuously distributed data, we cover identifying when data are likely to be discrete or contain discrete components, detecting and estimating possible bounds in the data, and a goodness-of-fit diagnostic for density plots produced via kernel density estimation.

Conclusion
We offer recommendations and diagnostic tools to mitigate ad-hoc decision-making in visual predictive checks. These contributions aim to improve the robustness and interpretability of Bayesian model criticism practices.


New paper Säilynoja, Johnson, Martin, and Vehtari, "Recommendations for visual predictive checks in Bayesian workflow" teemusailynoja.github.io/visual-predi... (also arxiv.org/abs/2503.01509)

04.03.2025 13:15 · 👍 64    🔁 21    💬 5    📌 0
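
The kind of check described in the abstract above can be sketched in a few lines. This is a toy illustration only, not code from the paper: the gamma-distributed "observed" data and the 50 predictive replicates are invented for the example, and scipy's default KDE bandwidth is used.

```python
# Toy KDE-based visual predictive check (illustrative sketch, not the paper's code).
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
y_obs = rng.gamma(shape=2.0, scale=1.5, size=200)        # made-up "observed" data
y_rep = rng.gamma(shape=2.2, scale=1.4, size=(50, 200))  # 50 fake posterior predictive draws

grid = np.linspace(0.0, 1.2 * y_obs.max(), 300)
for rep in y_rep:
    # light lines: KDEs of the predictive replicates
    plt.plot(grid, gaussian_kde(rep)(grid), color="steelblue", alpha=0.3, lw=1)
# dark line: KDE of the observed data
plt.plot(grid, gaussian_kde(y_obs)(grid), color="black", lw=2, label="observed")
plt.xlabel("y")
plt.ylabel("density")
plt.legend()
plt.show()
```

For bounded or partly discrete observations, an overlay like this can itself be misleading (for instance, the KDEs leak probability mass below zero), which is the kind of mismatch the diagnostics in the paper are meant to flag.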

A study with 5M+ data points explores the link between cognitive parameters and socioeconomic outcomes: The stability of processing speed was the strongest predictor.

BayesFlow facilitated efficient inference for complex decision-making models, scaling Bayesian workflows to big data.

🔗 Paper

03.02.2025 12:21 · 👍 18    🔁 6    💬 0    📌 0

A reminder of our talk this Thursday (30th Jan), at 11am GMT. Paul Bürkner (TU Dortmund University) will talk about "Amortized Mixture and Multilevel Models". Sign up at listserv.csv.warwick... to receive the link.

27.01.2025 09:04 · 👍 19    🔁 6    💬 0    📌 1

Scholar Inbox is the best paper recommender and I cannot recommend it enough as a conference companion. I don't know how people do poster sessions without it.

16.01.2025 21:39 · 👍 29    🔁 1    💬 1    📌 0

1️⃣ An agent-based model simulates a dynamic population of professional speed climbers.
2️⃣ BayesFlow handles amortized parameter estimation in the SBI setting.

📣 Shoutout to @masonyoungblood.bsky.social & @sampassmore.bsky.social

📄 Preprint: osf.io/preprints/ps...
💻 Code: github.com/masonyoungbl...

10.12.2024 01:34 · 👍 41    🔁 6    💬 0    📌 0

The meme that never dies ✨

07.12.2024 10:03 · 👍 4    🔁 0    💬 0    📌 0
06.12.2024 12:52 · 👍 3    🔁 0    💬 0    📌 0

Check out this project on modeling stationary and time-varying parameters with BayesFlow.

The family of methods is called "neural superstatistics", how can it not be cool!? 😎

👨‍💻 Led by @schumacherlu.bsky.social

06.12.2024 12:25 · 👍 11    🔁 3    💬 0    📌 0

I can definitely relate to looking up your own writing to figure out how you actually did things 😅

27.11.2024 09:42 · 👍 1    🔁 0    💬 0    📌 0

@dtfrazier.bsky.social

26.11.2024 08:56 · 👍 1    🔁 0    💬 1    📌 0

Stellar TL;DR of our recent work by our team! ✨

26.11.2024 08:47 · 👍 2    🔁 0    💬 0    📌 0

To celebrate the new beginnings on Bluesky, let's reminisce about one of our highlights from the old days:

The unexpected shout-out by @fchollet.bsky.social that made everyone go crazy on the BayesFlow Slack server and led to a 15% increase in GitHub stars.

22.11.2024 22:37 · 👍 11    🔁 3    💬 0    📌 0
GitHub - bayesflow-org/bayesflow at dev: A Python library for amortized Bayesian workflows using generative neural networks.

The beta version of BayesFlow 2.0 is becoming more powerful and stable by the day. If you are curious about Amortized Bayesian Inference, give BayesFlow a try!
github.com/bayesflow-or...

22.11.2024 08:52 · 👍 118    🔁 25    💬 6    📌 1
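
For readers wondering what "amortized" means here: the expensive work (simulating from the model and fitting an estimator) is done once up front, after which inference for any new dataset is close to instant. Below is a deliberately tiny sketch of that pattern for a conjugate Gaussian toy model. It uses a linear regression on a hand-picked summary statistic as a stand-in for BayesFlow's neural networks, so it does not reflect the BayesFlow API; every name and number in it is made up for the illustration.

```python
# Toy amortized-inference sketch (NOT the BayesFlow API): simulate once, reuse forever.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_obs = 5000, 20

# 1) Simulation phase: draw (theta, x) pairs from prior theta ~ N(0, 1)
#    and simulator x_i | theta ~ N(theta, 1).
theta = rng.normal(0.0, 1.0, size=n_train)
x = rng.normal(theta[:, None], 1.0, size=(n_train, n_obs))

# 2) "Training" phase: regress theta on the sample mean of x.
#    (A conditional neural density estimator would go here instead.)
s = x.mean(axis=1)
design = np.column_stack([np.ones_like(s), s])
coef, *_ = np.linalg.lstsq(design, theta, rcond=None)

# 3) Amortized phase: a new dataset costs only one cheap evaluation.
x_new = rng.normal(0.7, 1.0, size=n_obs)
theta_hat = coef @ np.array([1.0, x_new.mean()])
print(f"amortized estimate of the posterior mean: {theta_hat:.2f}")
print(f"analytic posterior mean for comparison:   {n_obs * x_new.mean() / (n_obs + 1):.2f}")
```

Because step 3 is so cheap, the same fitted estimator can be applied to thousands or millions of datasets, which is what makes amortized inference attractive for studies like the 5M-observation analysis mentioned above.
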
The Statistical Accuracy of Neural Posterior and Likelihood Estimation: Neural posterior estimation (NPE) and neural likelihood estimation (NLE) are machine learning approaches that provide accurate posterior and likelihood approximations in complex modeling scenarios, ...

Thrilled to contribute to this work led by David Frazier, which provides theory for NPE/NLE in simulation-based inference. These methods are known to match the accuracy of ABC and BSL with fewer simulations; this paper rigorously shows why that can be achieved.
arxiv.org/abs/2411.12068

21.11.2024 06:04 · 👍 53    🔁 11    💬 5    📌 2
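
For contrast, here is a minimal rejection-ABC sketch for the same toy Gaussian model as in the amortized example above. ABC needs a fresh batch of simulations (and discards most of them) for every single dataset; the tolerance and simulation budget below are arbitrary and serve only to make the comparison concrete, not to reproduce anything from the paper.

```python
# Minimal rejection-ABC sketch (illustration only, not the paper's analysis).
import numpy as np

rng = np.random.default_rng(0)
n_obs = 20
x_obs = rng.normal(0.7, 1.0, size=n_obs)   # one observed dataset

n_sim, eps = 100_000, 0.05
theta = rng.normal(0.0, 1.0, size=n_sim)                      # prior draws
x_sim = rng.normal(theta[:, None], 1.0, size=(n_sim, n_obs))  # one simulation per draw
keep = np.abs(x_sim.mean(axis=1) - x_obs.mean()) < eps        # accept if summaries match
posterior_sample = theta[keep]
print(f"accepted {keep.sum()} of {n_sim} simulations; "
      f"posterior mean estimate: {posterior_sample.mean():.2f}")
```

All of this simulation effort buys the posterior for exactly one dataset and must be repeated from scratch for the next one, whereas the amortized estimator above reuses its training simulations across datasets.
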
Seminar on Advances in Probabilistic Machine Learning This seminar series aims to provide a platform for young researchers (PhD student or post-doc level) to give invited talks about their research, intending to have a diverse set of talks & speakers on ...

For those who don't know yet, I am organising an online talk series together with Arno Solin on "Advances in Probabilistic Machine Learning (APML)".

It's free for everyone to join and support early career researchers!

You can register and check out the schedule here: aaltoml.github.io/apml/

20.11.2024 20:33 · 👍 93    🔁 30    💬 2    📌 1

The first list filled up, so here's a second list of AI for Science researchers on bluesky.

Let me know if I missed you / if you'd like to join!

bsky.app/starter-pack...

20.11.2024 08:56 · 👍 71    🔁 29    💬 58    📌 0

I'm making a list of AI for Science researchers on bluesky – let me know if I missed you / if you'd like to join!

go.bsky.app/AcP9Lix

10.11.2024 00:11 · 👍 246    🔁 90    💬 160    📌 5

✨ Super excited to share our paper **Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness** arxiv.org/abs/2408.05446 ✨

Inspired by biology, we 1) get adversarial robustness + interpretability for free, 2) turn classifiers into generators & 3) design attacks on GPT-4

19.11.2024 18:03 · 👍 31    🔁 5    💬 2    📌 1

Bluesky now has over 20M people!! 🎉

We've been adding over a million users per day for the last few days. To celebrate, here are 20 fun facts about Bluesky:

19.11.2024 18:19 · 👍 132178    🔁 16360    💬 3143    📌 1416

🙋

19.11.2024 11:53 · 👍 0    🔁 0    💬 1    📌 0

Same here :)

19.11.2024 08:55 · 👍 1    🔁 0    💬 0    📌 0

🙋‍♂️ Working on deep learning for taming complex cognitive models.

19.11.2024 08:49 · 👍 0    🔁 0    💬 0    📌 0

Would be great if you could add me!

19.11.2024 08:30 · 👍 1    🔁 0    💬 0    📌 0
Yann LeCun's analogy of intelligence being a cake of self-supervised, supervised, and RL

Eight years later, Yann LeCun's cake 🍰 analogy was spot on: self-supervised > supervised > RL

> "If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning (RL)."

17.11.2024 16:02 · 👍 94    🔁 12    💬 10    📌 2

Well, at least in the long run... 😄

17.11.2024 17:08 · 👍 1    🔁 0    💬 0    📌 0

The coolest starter pack out here! (in my totally unbiased opinion)

17.11.2024 15:29 · 👍 2    🔁 0    💬 1    📌 0

@elseml is following 20 prominent accounts