two days in, and I’m still waiting for an H100 :)
17.10.2025 11:58 — 👍 1 🔁 0 💬 1 📌 0
@lebellig.bsky.social
Ph.D. student on generative models and domain adaptation for Earth observation 🛰 Previously intern @SonyCSL, @Ircam, @Inria 🌎 Personal website: https://lebellig.github.io/
Excited to share SamudrACE, the first 3D AI ocean–atm–sea-ice #climate emulator! 🚀 Simulates 800 years in 1 day on 1 GPU, ~100× faster than traditional models, straight from your laptop 👩💻 Collaboration with @ai2.bsky.social and GFDL, advancing #AIforScience with #DeepLearning.
tinyurl.com/Samudrace
I'm already waiting for the next generation of "diffusion transformer features are well-suited for discriminative tasks" papers, but with DiTs trained on these representation autoencoders, and the loop will be closed
15.10.2025 11:55 — 👍 4 🔁 0 💬 0 📌 0
Diffusion Transformers with Representation Autoencoders by Boyang Zheng et al. (arxiv.org/abs/2510.116...)
Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that such encoders' information compression is ill-suited to generative modeling!
"How to build a consistency model: Learning flow maps via self-distillation" by @nmboffi.bsky.social et al (arxiv.org/abs/2505.18825)
New method to train flow maps without any pretrained flow matching/diffusion models!
While working on semidiscrete flow matching this summer (➡️ arxiv.org/abs/2509.25519), I kept looking for a video illustrating that the velocity field solving the Benamou-Brenier OT problem is NOT constant w.r.t. time ⏳... so I did it myself, take a look! ott-jax.readthedocs.io/tutorials/th...
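For intuition, here is a tiny closed-form 1D check (my own sketch, independent of the ott-jax tutorial): between two Gaussians, the Benamou-Brenier particle paths are straight lines, so the Lagrangian velocity T(x) - x is constant along each path, yet the Eulerian field v(y, t) evaluated at a fixed location y changes with t.

```python
# 1D OT between N(0, 1) and N(2, 9): the Monge map is affine, T(x) = 2 + 3x.
def T(x):
    return 2.0 + 3.0 * x

def eulerian_velocity(y, t):
    """Velocity of the Benamou-Brenier solution at location y, time t.
    Particles follow x_t = (1 - t) x + t T(x); invert that for x, then
    the velocity carried through (y, t) is T(x) - x."""
    x = (y - 2.0 * t) / (1.0 + 2.0 * t)  # invert x_t = x (1 + 2t) + 2t
    return T(x) - x

# Same location y = 1, two different times: the velocity differs.
v0 = eulerian_velocity(1.0, 0.0)      # 4.0
v_half = eulerian_velocity(1.0, 0.5)  # 2.0
print(v0, v_half)
```

Only when T is a pure translation (equal variances) does the time dependence vanish, which is exactly why the constant-velocity picture is misleading in general.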
09.10.2025 20:09 — 👍 9 🔁 1 💬 0 📌 0
🚨Updated: "How far can we go with ImageNet for Text-to-Image generation?"
TL;DR: train a text2image model from scratch on ImageNet only and beat SDXL.
Paper, code, data available! Reproducible science FTW!
🧵👇
📜 arxiv.org/abs/2502.21318
💻 github.com/lucasdegeorg...
💽 huggingface.co/arijitghosh/...
Very excited to share our preprint: Self-Speculative Masked Diffusions
We speed up sampling of masked diffusion models by ~2x by using speculative sampling and a hybrid non-causal / causal transformer
arxiv.org/abs/2510.03929
w/ @vdebortoli.bsky.social, Jiaxin Shi, @arnauddoucet.bsky.social
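For readers unfamiliar with the building block: this is not the paper's hybrid non-causal/causal transformer, just a minimal sketch of the standard speculative-sampling accept/reject rule that such speedups are built on. Draw from a cheap draft distribution q, accept with probability min(1, p/q), otherwise resample from the residual max(p - q, 0); the output is exactly distributed as the target p.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.6, 0.3, 0.1])  # "expensive" target distribution
q = np.array([0.3, 0.3, 0.4])  # cheap draft distribution
residual = np.maximum(p - q, 0.0)
residual /= residual.sum()     # renormalized correction distribution

n = 100_000
draft = rng.choice(len(q), p=q, size=n)                        # draft proposals
accept = rng.random(n) < np.minimum(1.0, p[draft] / q[draft])  # accept w.p. min(1, p/q)
corrected = rng.choice(len(p), p=residual, size=n)             # fallback draws
samples = np.where(accept, draft, corrected)

freqs = np.bincount(samples, minlength=len(p)) / n
print(freqs)  # ≈ [0.6, 0.3, 0.1]: exact samples from p despite drafting from q
```

The speedup comes from the accepted drafts being much cheaper to produce than target-model samples.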
🚀 After more than a year of work — and many great discussions with curious minds & domain experts — we’re excited to announce the public release of 𝐀𝐩𝐩𝐚, our latent diffusion model for global data assimilation!
Check the repo and the complete wiki!
github.com/montefiore-s...
"Be Tangential to Manifold: Discovering Riemannian Metric for Diffusion Models" Shinnosuke Saito et al. arxiv.org/abs/2510.05509
High-density regions might not be the most interesting areas to visit. Thus, they define a new Riemannian metric for diffusion models that relies on the Jacobian of the score.
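As a loose illustration of the ingredient involved (the G = I + λ JᵀJ choice below is mine, not the paper's construction): for a Gaussian the score is available in closed form, so one can assemble a positive-definite candidate metric from its Jacobian.

```python
import numpy as np

# For N(mu, Sigma) the score is s(x) = -Sigma^{-1} (x - mu),
# so its Jacobian is the constant matrix -Sigma^{-1}.
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def score(x):
    return -Sigma_inv @ (x - mu)

def score_jacobian(x, eps=1e-5):
    """Central finite-difference Jacobian J_ij = d s_i / d x_j."""
    d = len(x)
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        J[:, j] = (score(x + e) - score(x - e)) / (2 * eps)
    return J

def metric(x, lam=1.0):
    """Toy positive-definite metric G = I + lam * J^T J (illustrative choice)."""
    J = score_jacobian(x)
    return np.eye(len(x)) + lam * J.T @ J

G = metric(np.array([1.0, -1.0]))
print(np.linalg.eigvalsh(G))  # all eigenvalues >= 1: positive definite
```

In practice the score Jacobian of a trained model would come from autodiff (e.g. a vector-Jacobian product) rather than finite differences.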
#Distinction 🏆 | Charlotte Pelletier, recipient of an #IUF chair, develops artificial intelligence methods applied to satellite image time series.
➡️ www.ins2i.cnrs.fr/fr/cnrsinfo/...
🤝 @irisa-lab.bsky.social @cnrs-bretagneloire.bsky.social
Reposting because part of me wants to see EBMs make a comeback and hopes flow-based training can help them scale.
07.10.2025 18:30 — 👍 4 🔁 0 💬 0 📌 0
Our two phenomenal interns, Alireza Mousavi-Hosseini and Stephen Zhang @syz.bsky.social, have been cooking some really cool work with Michal Klein and me over the summer.
Relying on optimal transport couplings (to pick noise and data pairs) should, in principle, be helpful to guide flow matching
🧵
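The minibatch version of that idea is easy to sketch (a common recipe, the interns' work may differ in the details): instead of pairing noise and data at random, solve an assignment problem on the squared-distance cost so each noise sample is matched to a nearby data sample, which straightens the flow-matching paths.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 2))          # x0 ~ N(0, I)
data = rng.normal(loc=3.0, size=(64, 2))  # x1: a shifted toy "data" batch

# Pairwise squared-distance cost, then the optimal permutation coupling.
cost = ((noise[:, None, :] - data[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)
x0, x1 = noise[rows], data[cols]          # OT-coupled training pairs

random_cost = np.mean(((noise - data) ** 2).sum(-1))  # naive index pairing
ot_cost = cost[rows, cols].mean()
print(ot_cost <= random_cost)  # True: matched pairs are closer on average
```

The OT cost is never worse than the random pairing, since the identity permutation is one feasible assignment.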
You can learn (condition+time)-dependent weights for classifier-free guidance using reward functions like the CLIP score arxiv.org/abs/2510.00815. I wonder if, for text-to-image models, the temporal evolution of learned weights reveals information about the sizes of objects described in the caption
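Mechanically, the change is small (sketch below; the paper learns the weight via reward functions like CLIP score, whereas the schedule here is a made-up placeholder): the usual constant guidance scale w becomes a function w(t) applied at each denoising step.

```python
import numpy as np

def guided_eps(eps_uncond, eps_cond, t, w_fn):
    """Classifier-free guidance with a time-dependent weight w(t)
    in place of the usual constant scale."""
    return eps_uncond + w_fn(t) * (eps_cond - eps_uncond)

# Hypothetical schedule for illustration: stronger guidance late in
# sampling (t -> 0), plain conditional prediction at t = 1.
w_fn = lambda t: 1.0 + 6.0 * (1.0 - t)

eps_u = np.array([0.1, -0.2])  # unconditional noise prediction
eps_c = np.array([0.4, 0.1])   # conditional noise prediction
print(guided_eps(eps_u, eps_c, t=1.0, w_fn=w_fn))  # w=1: equals eps_c
print(guided_eps(eps_u, eps_c, t=0.0, w_fn=w_fn))  # w=7: heavily guided
```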
02.10.2025 09:41 — 👍 3 🔁 0 💬 0 📌 0
You can support my proposal to the Cour des Comptes to audit public procurement contracts with travel agencies, notably in French higher education and research (ESR):
participationcitoyenne.ccomptes.fr/processes/co...
Are you targeting a specific task like regression/classification/generation?
14.09.2025 19:06 — 👍 0 🔁 0 💬 0 📌 0
I agree that results may differ on higher-dimensional datasets. Still, I appreciate this line of work, which questions the generalization capabilities of flow-based models by combining mathematical insights with experimental observations on image datasets (not only 2D Gaussian mixtures)
13.09.2025 20:21 — 👍 0 🔁 0 💬 0 📌 0
Locality in Image Diffusion Models Emerges from Data Statistics, by Artem Lukoianov et al. (arxiv.org/abs/2509.09672). Retweet if you want your model to generate samples in the "sampling voids".
12.09.2025 17:28 — 👍 11 🔁 3 💬 1 📌 0
#Communiqué 🗞️ The 2025 CNRS gold medal is awarded to Stéphane Mallat, internationally renowned for his work on applied mathematics for signal processing and artificial intelligence. 👏
👉 cnrs.fr/fr/presse/en...
#TalentsCNRS 🏅
☀️ Just wrapped up the DeepInverse Hackathon!
We had 30+ imaging scientists from all over the world coding for three days next to the beautiful Calanques in Marseille, France. It was a great moment to meet new people, discuss science, and code new imaging algorithms!
Grateful for the opportunity to speak at tomorrow’s Learning Machines seminar (RISE+@climateainordics.com) on generative domain adaptation and geospatial foundation models benchmarking for robust Earth observation 🌍
Join on Sept 11 at 15:00 CET! www.ri.se/en/learningm...
If you're interested in joining the video call, shoot me a DM and I'll send you the link! (Time zone is Paris.)
One-day workshop on Diffusion models and Flow matching, October 24th at @ensdelyon.bsky.social
Registration and call for contributions (short talk and poster) are open at
gdr-iasis.cnrs.fr/reunions/mod...
Does a smaller latent space lead to worse generation in latent diffusion models? Not necessarily! We show that LDMs are extremely robust to a wide range of compression rates (10-1000x) in the context of physics emulation.
We got lost in latent space. Join us 👇
I am very happy to finally share something I have been working on, on and off, for the past year:
"The Information Dynamics of Generative Diffusion"
This paper connects entropy production, divergence of vector fields and spontaneous symmetry breaking
link: arxiv.org/abs/2508.19897
New paper on arXiv! And I think it's a good'un 😄
Meet the new Lattice Random Walk (LRW) discretisation for SDEs. It’s radically different from traditional methods like Euler-Maruyama (EM) in that each iteration can only move in discrete steps {-δₓ, 0, δₓ}.
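To make the contrast with EM concrete, here is my own toy moment-matching version of the idea (not the paper's exact scheme): for dX = μ dt + σ dW, choose the probabilities of the three lattice moves so each step's mean is μ dt and its variance is approximately σ² dt.

```python
import numpy as np

def lrw_step_probs(mu, sigma, dt, dx):
    """Probabilities of moving by -dx, staying, or moving by +dx,
    matched to the first two moments of the SDE increment."""
    p_plus = 0.5 * (sigma**2 * dt / dx**2 + mu * dt / dx)
    p_minus = 0.5 * (sigma**2 * dt / dx**2 - mu * dt / dx)
    p_stay = 1.0 - p_plus - p_minus
    assert min(p_plus, p_minus, p_stay) >= 0, "shrink dt or grow dx"
    return p_minus, p_stay, p_plus

rng = np.random.default_rng(0)
mu, sigma, dt, dx, T = 1.0, 0.5, 1e-3, 0.05, 1.0
probs = lrw_step_probs(mu, sigma, dt, dx)

# 10k independent walkers: each step is a draw from {-dx, 0, +dx}.
steps = rng.choice([-dx, 0.0, dx], p=probs, size=(10_000, int(T / dt)))
X_T = steps.sum(axis=1)
print(X_T.mean(), X_T.var())  # ≈ mu*T = 1.0 and ≈ sigma^2*T = 0.25
```

Unlike EM, the state always lives on the lattice {k·δₓ}, which is what enables the discrete-state tricks the paper exploits.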
Late to the party, but I like the fact that you can use a geodesic random walk (i.e., actually simulating the random walk) to derive the SDEs needed for diffusion models on Riemannian manifolds (from arxiv.org/abs/2202.02763)
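A minimal version of that construction on the unit sphere (toy sketch, not the paper's derivation): draw a Gaussian step in the tangent plane at the current point, then follow the geodesic via the exponential map. In the small-step limit this converges to Brownian motion on the manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_map(x, v):
    """Sphere exponential map: walk along the great circle from x in direction v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return x
    return np.cos(norm) * x + np.sin(norm) * v / norm

def geodesic_rw(x, n_steps=1000, dt=1e-3):
    for _ in range(n_steps):
        z = rng.normal(size=3)
        v = (z - (z @ x) * x) * np.sqrt(dt)  # project the step onto the tangent plane at x
        x = exp_map(x, v)
    return x

x = geodesic_rw(np.array([0.0, 0.0, 1.0]))
print(np.linalg.norm(x))  # the walk never leaves the sphere: ≈ 1.0
```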
30.08.2025 09:28 — 👍 8 🔁 0 💬 0 📌 0
🌍🤖 New Climate AI Nordics newsletter is out! Highlights: Bayesian optimisation survey, member spotlight, events & jobs in AI + climate. 👉 climateainordics.com/newsletter/2...
29.08.2025 13:56 — 👍 3 🔁 3 💬 0 📌 0
Hugging Face's Transformers library has dropped JAX support... but if, by any chance, someone builds a great and beautifully written flow matching/diffusion library in JAX, I'd seriously consider switching from torch 🤗
26.08.2025 17:20 — 👍 3 🔁 0 💬 0 📌 0