Antoine Wehenkel

@awehwe.bsky.social

ML + Physics + Health. Exploring the interaction between scientific and ML models.

621 Followers 119 Following 20 Posts Joined Nov 2024
6 months ago

Just finished delivering a course on 'Robust and scalable simulation-based inference (SBI)' at Greek Stochastics. This covered an introduction to SBI, open challenges, and some recent contributions from my own group.

The slides are now available here: fxbriol.github.io/pdfs/slides-....

8 months ago

Very excited about this 👇
Stay tuned for the call for papers and more info!! ☄️

8 months ago

A true francophone!

9 months ago
Cover page of the PhD thesis "Reinforcement Learning in Partially Observable Markov Decision Processes: Learning to Remember the Past by Learning to Predict the Future" by Gaspard Lambrechts

Two months after my PhD defense on RL in POMDP, I finally uploaded the final version of my thesis :)

You can find it here: hdl.handle.net/2268/328700 (manuscript and slides).

Many thanks to my advisors and to the jury members.

9 months ago
Slide showing three recent successes of reinforcement learning that have used an asymmetric actor-critic algorithm:
 - Magnetic Control of Tokamak Plasma through Deep RL (Degrave et al., 2022).
 - Champion-Level Drone Racing using Deep RL (Kaufmann et al., 2023).
 - A Super-Human Vision-Based RL Agent in Gran Turismo (Vasco et al., 2024).

📝 Our paper "A Theoretical Justification for Asymmetric Actor-Critic Algorithms" was accepted at #ICML!

Never heard of "asymmetric actor-critic" algorithms? Yet, many successful #RL applications use them (see image).

But these algorithms are not fully understood. Below, we provide some insights.
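The common trick in the successes above: during training in simulation, the critic conditions on privileged full state while the actor only gets the agent's observation. A minimal numpy sketch of that asymmetry on a hypothetical toy POMDP (all names, dimensions, and data are illustrative; this is not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy POMDP: the full state has 4 dims, but the actor only observes
# the first 2. Everything here is illustrative.
STATE_DIM, OBS_DIM, N_ACTIONS = 4, 2, 2
params = {
    "actor": np.zeros((OBS_DIM, N_ACTIONS)),  # conditioned on the observation
    "critic": np.zeros(STATE_DIM),            # conditioned on the full state
}

def policy_probs(obs):
    # Softmax policy over actions, from the partial observation only.
    logits = obs @ params["actor"]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def asymmetric_ac_update(state, reward, next_state, gamma=0.99, lr=0.05):
    obs = state[:OBS_DIM]                 # the actor's partial view
    p = policy_probs(obs)
    action = rng.choice(N_ACTIONS, p=p)
    # TD error from the privileged, state-conditioned critic: available
    # at training time in simulation, never needed at deployment.
    td = reward + gamma * (next_state @ params["critic"]) - state @ params["critic"]
    params["critic"] += lr * td * state                     # critic step
    onehot = np.eye(N_ACTIONS)[action]
    params["actor"] += lr * td * np.outer(obs, onehot - p)  # policy gradient

# Exercise the update on random transitions.
for _ in range(50):
    s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
    asymmetric_ac_update(s, rng.normal(), s_next)
```

At deployment only `policy_probs` (the observation-conditioned actor) is used; the privileged critic exists purely to help during training.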

9 months ago

Positions remain open! Both PhD and postdoctoral opportunities are available on scientific foundation models. An additional position is also available on AI for regional climate models (jointly with @xavierfettweis.bsky.social). Do not hesitate to apply!

9 months ago

Awesome! Will it be recorded?

9 months ago

Huge thanks to my co-authors 🙏
Juan L. Gamella, Ozan Sener, Jens Behrmann, Guillermo Sapiro, Jörn Jacobsen, Marco Cuturi.

We hope RoPE helps reframe model misspecification as a learning problem that requires real-world data to be solved.

9 months ago

This is just the beginning.

We hope RoPE pushes SBI toward:
✅ Embracing real-world constraints
✅ Blending domain knowledge + data
✅ Treating robust inference as a learning problem whose objective is tied to the downstream use of the inference results

9 months ago

Here’s what RoPE does:
1️⃣ Uses a small calibration set of real (x, θ) pairs
2️⃣ Learns a correction from simulated to real obs using optimal transport
3️⃣ Enables simulation-based inference you can actually trust
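As a toy illustration of step 2️⃣, here is what an optimal-transport correction from simulated to real observations can look like: a plain numpy Sinkhorn solver plus a barycentric projection. The offset data, the `eps` choice, and all names are hypothetical; this is a sketch of the general idea, not RoPE's implementation:

```python
import numpy as np

def sinkhorn(cost, eps, n_iter=200):
    """Entropy-regularized OT plan between two uniform empirical measures."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Hypothetical data: the "real" observations are the simulated ones
# plus a systematic offset, mimicking simulator misspecification.
rng = np.random.default_rng(1)
x_sim = rng.normal(0.0, 1.0, size=(64, 3))
x_real = x_sim + 0.5

# Pairwise squared-distance cost; eps scaled to the cost magnitude.
cost = ((x_sim[:, None, :] - x_real[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost, eps=0.5 * cost.mean())

# Barycentric projection: map each simulated point to a weighted
# average of real points -- a simple sim-to-real correction.
x_corrected = (plan / plan.sum(axis=1, keepdims=True)) @ x_real
```

The corrected samples land on the real-data distribution, so a posterior estimator trained on simulations can be applied to real observations through this learned map.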

9 months ago

That’s what motivated RoPE, our method presented at ICML 2025 🎉

RoPE reframes misspecification as a posterior-inaccuracy problem rather than a simulator/data mismatch, in contrast to how model misspecification is often defined in the literature.

9 months ago

Humans design robust statistics by intuition.
Neural SBI doesn’t — unless we teach it how.

🔑 Insight: To make SBI robust, show it real-world data.
And use labeled data to validate the newly learned inference pipeline.
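One concrete way to use labeled data here is an empirical coverage check: on held-out (x, θ) pairs, test how often the true θ falls inside the posterior's credible interval. A self-contained numpy sketch on a hypothetical conjugate-Gaussian toy problem (not the paper's setup; a real pipeline would draw from its learned posterior instead):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration set: theta ~ N(0, 1), x = theta + N(0, 1).
n_cal, n_post = 400, 500
theta = rng.normal(size=n_cal)
x = theta + rng.normal(size=n_cal)

# For this conjugate toy model the exact posterior is N(x / 2, 1 / 2).
post_samples = x[:, None] / 2 + np.sqrt(0.5) * rng.normal(size=(n_cal, n_post))

def empirical_coverage(theta, samples, level=0.9):
    """Fraction of true parameters inside the central credible interval."""
    alpha = (1 - level) / 2
    lo = np.quantile(samples, alpha, axis=1)
    hi = np.quantile(samples, 1 - alpha, axis=1)
    return np.mean((theta >= lo) & (theta <= hi))

cov = empirical_coverage(theta, post_samples)
# A calibrated pipeline should give cov close to the nominal 0.9;
# large deviations flag an untrustworthy inference pipeline.
```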

9 months ago

Why? Because inference method and simulator misspecification are deeply entangled.

🧠 Neural SBI often overfits to quirks in simulators.
🤔 Simpler methods (like ABC with handcrafted stats) often perform better when simulators are slightly wrong.
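For concreteness, this is the kind of "simpler method" meant above: ABC rejection with a handcrafted summary statistic, on a hypothetical Gaussian toy problem (the tolerance, prior range, and statistic are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical task: infer the mean mu of a Gaussian simulator from
# observed data, using only the sample mean as a handcrafted summary.
x_obs = rng.normal(1.5, 1.0, size=100)
s_obs = x_obs.mean()

def simulate(mu, n=100):
    # Stochastic simulator: n Gaussian draws around mu.
    return rng.normal(mu, 1.0, size=n)

# ABC rejection: keep prior draws whose simulated summary statistic
# lands within a tolerance of the observed one.
prior_draws = rng.uniform(-5, 5, size=5000)
accepted = [mu for mu in prior_draws
            if abs(simulate(mu).mean() - s_obs) < 0.2]
posterior_mean = np.mean(accepted)
```

Because the comparison happens through a low-dimensional, hand-picked statistic, quirks of the simulator that the statistic ignores cannot mislead the inference — unlike a neural posterior trained on raw simulator output.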

9 months ago

SBI thrives in ideal settings — but what happens when simulators aren’t perfect?

Real-world practitioners always ask:
“But what if the simulator is off?”

I used to think this was an issue related to the simulator and not to SBI. Now I believe this is the central issue with existing SBI algorithms.

9 months ago
Addressing Misspecification in Simulation-based Inference through Data-driven Calibration Driven by steady progress in deep generative modeling, simulation-based inference (SBI) has emerged as the workhorse for inferring the parameters of stochastic simulators. However, recent work has dem...

📣 New paper alert — To be presented at ICML 2025!

arxiv.org/abs/2405.08719

What does it really mean for a simulator to be misspecified, if our goal is to estimate parameters with calibrated uncertainty?

A 🧵on our new method, RoPE, and what it means for real-world SBI ⬇️

9 months ago

📢 I am looking for AI post-doc/research/engineer positions in Europe (Paris, London, Zurich, ...) starting 2026. My work revolves around generative modeling and AI for Science, with 4+ publications at top conferences during my PhD. If you are hiring, please reach out! If not, please repost 🔁

9 months ago

I can't recommend @francois-rozet.bsky.social enough 👇 He is both an excellent researcher and a coder that any ML team would dream of having onboard. CC: @danilojrezende.bsky.social @awehwe.bsky.social @bkmi.bsky.social @yann-lecun.bsky.social @johannbrehmer.bsky.social

9 months ago

But if my lab can help, please DM me.

10 months ago

Andry, Rozet, Lewin, Rochman, Mangeleer, Pirlet, Faulx, Grégoire, Louppe: Appa: Bending Weather Dynamics with Latent Diffusion Models for Global Data Assimilation https://arxiv.org/abs/2504.18720 https://arxiv.org/pdf/2504.18720 https://arxiv.org/html/2504.18720

10 months ago

This is from The Tonight Show with Johnny Carson aired on May 20th, 1977.

Carl Sagan says something very important, a strong message that hasn't lost any of its validity since then.

11 months ago
Generative modelling in latent space: latent representations for generative models.

New blog post: let's talk about latents!
sander.ai/2025/04/15/l...

11 months ago

Tariffs xkcd.com/3073

11 months ago

Fantastic initiative by @serge.belongie.com and Søren Hauberg 🇪🇺🇩🇰

Please take a moment to answer through the poll below and share in your networks! 👇

11 months ago

AI is just a hoax by big IKEA to sell more bedrooms.

11 months ago

This remains important today. As industry places huge financial bets on large language models, the research community needs to investigate other approaches that may achieve similar or better results via entirely different methods. end/

1 year ago

The two postdoctoral positions remain open! (The PhD position has been filled.) Applications from deep learning researchers or atmospheric scientists are welcome! Ping me if you have any questions!

1 year ago
Neural network deciphers gravitational waves from merging neutron stars in a second | Max Planck Institute for Intelligent Systems Binary neutron star mergers emit gravitational waves followed by light. To fully exploit these observations and avoid missing key signals, speed is crucial. In a study to be published in Nature on Mar...

Great work led by @maximiliandax.bsky.social now out in @nature.com: Neural network deciphers gravitational waves from merging neutron stars in a second - new method could be instrumental in preparing the field for the next generation of observatories: is.mpg.de/news/neural-... #AIforScience 1/2

1 year ago

Now accepted at @tmlr-pub.bsky.social 🥳

1 year ago

Reading the recent literature on "neural samplers" is an odd experience. There's a lot of attention drifting towards it (conceivably prompted by the increased attention on diffusion models and the like), and so many people are trying many things. It's a different way of thinking about things, for me.

1 year ago
Post image

Old is new. It's funny how so many ideas from what some call "good old-fashioned AI" keep resurfacing! My advice: revisit Russell and Norvig's book with 2025 deep learning.
