Just finished delivering a course on 'Robust and scalable simulation-based inference (SBI)' at Greek Stochastics. This covered an introduction to SBI, open challenges, and some recent contributions from my own group.
The slides are now available here: fxbriol.github.io/pdfs/slides-....
Very excited about this 👇
Stay tuned for the call for papers and more info!! ☄️
A true francophone!
Two months after my PhD defense on RL in POMDPs, I finally uploaded the final version of my thesis :)
You can find it here: hdl.handle.net/2268/328700 (manuscript and slides).
Many thanks to my advisors and to the jury members.
📝 Our paper "A Theoretical Justification for Asymmetric Actor-Critic Algorithms" was accepted at #ICML!
Never heard of "asymmetric actor-critic" algorithms? Yet many successful #RL applications use them (see image).
But these algorithms are not fully understood. Below, we provide some insights.
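For the unfamiliar: "asymmetric" means the critic is conditioned on the full state, which is available at training time in simulation, while the actor only sees the observation. Here is a minimal sketch of that idea (my illustration under those assumptions, not the paper's method):

```python
# Minimal asymmetric actor-critic sketch (illustrative, not from the paper).
import torch
import torch.nn as nn

obs_dim, state_dim, n_actions = 16, 32, 4

# Actor sees only the observation o: pi(a | o).
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                      nn.Linear(64, n_actions))
# Critic sees the full state s (privileged info, available in simulation): V(s).
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                       nn.Linear(64, 1))

def a2c_losses(obs, state, action, ret):
    """Actor-critic losses on a batch of transitions; `ret` is any
    return estimate (e.g. n-step bootstrapped)."""
    value = critic(state).squeeze(-1)             # asymmetric: critic uses s
    advantage = ret - value
    logp = torch.log_softmax(actor(obs), dim=-1)  # actor only uses o
    chosen = logp.gather(1, action[:, None]).squeeze(1)
    actor_loss = -(chosen * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    return actor_loss, critic_loss
```

The only change from a standard actor-critic is the critic's input: everything else is untouched, which is exactly why the setup is so easy to adopt in practice.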
Positions remain open! Both PhD and postdoctoral opportunities are available on scientific foundation models. An additional position is also available on AI for regional climate models (jointly with @xavierfettweis.bsky.social). Do not hesitate to apply!
Great! Will it be recorded?
Huge thanks to my co-authors 🙏
Juan L. Gamella, Ozan Sener, Jens Behrmann, Guillermo Sapiro, Jörn Jacobsen, Marco Cuturi.
We hope RoPE helps reframe model misspecification as a learning problem, one that requires real-world data to solve.
This is just the beginning.
We hope RoPE pushes SBI toward:
✅ Embracing real-world constraints
✅ Blending domain knowledge + data
✅ Treating robust inference as a learning problem whose objective is aligned with the downstream application of the inference results
Here's what RoPE does (a rough code sketch follows the list):
1️⃣ Uses a small calibration set of real (x, θ) pairs
2️⃣ Learns a correction from simulated to real observations using optimal transport
3️⃣ Enables simulation-based inference you can actually trust
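A minimal sketch of the recipe above (my paraphrase, not the authors' implementation; `simulate`, `embed`, and `npe_posterior` are hypothetical stand-ins for the simulator, a summary network, and a pretrained neural posterior estimator):

```python
# Rough sketch of the RoPE recipe (illustrative, not the authors' code).
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_match(real_stats, sim_stats):
    """Exact OT between two equal-size point clouds of summary
    statistics, solved as a linear assignment on squared distances."""
    cost = ((real_stats[:, None, :] - sim_stats[None, :, :]) ** 2).sum(-1)
    return linear_sum_assignment(cost)

def calibrated_posterior(x_new, x_real, theta_cal,
                         simulate, embed, npe_posterior):
    # 1) Simulate "twins" of the calibration parameters.
    x_sim = np.stack([simulate(t) for t in theta_cal])
    # 2) Couple real and simulated summary statistics with OT.
    rows, cols = ot_match(embed(x_real), embed(x_sim))
    # 3) Fit a simple (here: linear) correction, real -> simulated.
    A, *_ = np.linalg.lstsq(embed(x_real)[rows], embed(x_sim)[cols],
                            rcond=None)
    # 4) Query the simulator-trained posterior on corrected statistics.
    return npe_posterior(embed(x_new) @ A)
```

The exact assignment is just the simplest OT solver, and the linear map is a placeholder for whatever correction one learns in practice; the point is the shape of the pipeline, not these specific choices.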
That’s what motivated RoPE, our method being presented at ICML 2025 🎉
RoPE reframes misspecification as a problem of posterior inaccuracy rather than a simulator/data mismatch, in contrast to how model misspecification is often defined in the literature.
Humans design robust statistics by intuition.
Neural SBI doesn’t — unless we teach it how.
🔑 Insight: To make SBI robust, show it real-world data.
And use labeled data to build trust in the newly learned inference pipeline.
Why? Because inference method and simulator misspecification are deeply entangled.
🧠 Neural SBI often overfits to quirks in simulators.
🤔 Simpler methods (like ABC with handcrafted stats) often perform better when simulators are slightly wrong.
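As a reminder of what that baseline looks like, here is a minimal rejection-ABC sketch (illustrative only; `prior_sample`, `simulate`, and `stat` are hypothetical stand-ins):

```python
# Minimal rejection ABC with a handcrafted summary statistic (illustrative).
import numpy as np

def rejection_abc(x_obs, prior_sample, simulate, stat, eps, n=10_000):
    """Keep parameters whose simulated summary lands within eps
    of the observed summary."""
    s_obs = stat(x_obs)
    accepted = []
    for _ in range(n):
        theta = prior_sample()
        if np.linalg.norm(stat(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return np.asarray(accepted)  # samples from the ABC posterior
```

A well-chosen `stat` discards exactly the simulator quirks that a neural method would otherwise latch onto, which is one intuition for this robustness.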
SBI thrives in ideal settings — but what happens when simulators aren’t perfect?
Real-world practitioners always ask:
“But what if the simulator is off?”
I used to think this was an issue with the simulator, not with SBI. Now I believe it is the central issue with existing SBI algorithms.
📣 New paper alert — To be presented at ICML 2025!
arxiv.org/abs/2405.08719
What does it really mean for a simulator to be misspecified, if our goal is to estimate parameters with calibrated uncertainty?
A 🧵on our new method, RoPE, and what it means for real-world SBI ⬇️
📢 I am looking for AI post-doc/research/engineer positions in Europe (Paris, London, Zurich, ...) starting 2026. My work revolves around generative modeling and AI for Science, with 4+ publications at top conferences during my PhD. If you are hiring, please reach out! If not, please repost 🔁
I can't recommend @francois-rozet.bsky.social enough 👇 He is the rare combination of excellent researcher and coder that any ML team would dream of having onboard. CC: @danilojrezende.bsky.social @awehwe.bsky.social @bkmi.bsky.social @yann-lecun.bsky.social @johannbrehmer.bsky.social
But if my lab can help, please DM me.
Andry, Rozet, Lewin, Rochman, Mangeleer, Pirlet, Faulx, Grégoire, Louppe: Appa: Bending Weather Dynamics with Latent Diffusion Models for Global Data Assimilation https://arxiv.org/abs/2504.18720 https://arxiv.org/pdf/2504.18720 https://arxiv.org/html/2504.18720
This is from The Tonight Show with Johnny Carson aired on May 20th, 1977.
Carl Sagan says something very important, a strong message that has lost none of its validity since then.
Tariffs xkcd.com/3073
Fantastic initiative by @serge.belongie.com and Søren Hauberg 🇪🇺🇩🇰
Please take a moment to answer through the poll below and share in your networks! 👇
AI is just a hoax by big IKEA to sell more bedrooms.
This remains important today. As industry places huge financial bets on large language models, the research community needs to investigate other approaches that may achieve similar or better results via entirely different methods. end/
The two postdoctoral positions remain open! (the PhD position has been filled) Applications from deep learning researchers or atmospheric scientists are welcome! Ping me if you have any questions!
Great work led by @maximiliandax.bsky.social now out in @nature.com: Neural network deciphers gravitational waves from merging neutron stars in a second - new method could be instrumental in preparing the field for the next generation of observatories: is.mpg.de/news/neural-... #AIforScience 1/2
Now accepted at @tmlr-pub.bsky.social 🥳
Reading the recent literature on "neural samplers" is an odd experience. A lot of attention is drifting toward them (conceivably prompted by the surge of interest in diffusion models and the like), and many people are trying many things. It's a different way of thinking about these problems than I'm used to.
Old is new. It's funny how so many ideas from what some call "good old-fashioned AI" keep resurfacing! My advice: revisit Russell and Norvig's book through the lens of 2025 deep learning.