"Improved Mean Flows: On the Challenges of Fastforward Generative Models" arxiv.org/abs/2512.02012 questions this approximation and proposes a new training process for mean flows.
03.12.2025 21:32
Hi @google, can you provide 100k TPU hours to explore the design space of diffusion bridges for image-to-image translation? x1 vs drift pred, architectures and # params, # datasets, scaling couplings and batch sizes (for minibatch-based couplings). I can run everything in jax in return...
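Since the post mentions the x1-vs-drift prediction axis: a hedged sketch of the two parameterizations for a Brownian-bridge reference process between paired images. All names are illustrative, and t < 1 is assumed throughout.

```python
import jax
import jax.numpy as jnp

def bridge_sample(key, x0, x1, t, sigma=1.0):
    # Brownian bridge pinned at x0 (t=0) and x1 (t=1).
    eps = jax.random.normal(key, x0.shape)
    return (1.0 - t) * x0 + t * x1 + sigma * jnp.sqrt(t * (1.0 - t)) * eps

def x1_prediction_loss(net, params, key, x0, x1, t):
    x_t = bridge_sample(key, x0, x1, t)
    return jnp.mean((net(params, x_t, t) - x1) ** 2)   # regress the clean endpoint

def drift_prediction_loss(net, params, key, x0, x1, t):
    x_t = bridge_sample(key, x0, x1, t)
    drift = (x1 - x_t) / (1.0 - t)                      # bridge drift toward x1 (t < 1)
    return jnp.mean((net(params, x_t, t) - drift) ** 2)
```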
03.12.2025 21:19
Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration w/ @vickykalogeiton.bsky.social (& @loicland.bsky.social)
Video: youtube.com/live/DXQ7FZA...
Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras
@climateainordics.com is now on youtube! Check out some amazing talks on how to help fight climate change using AI!
youtube.com/@climateaino...
@neuripsconf.bsky.social is two weeks away!
📢 Stop missing great workshop speakers just because the workshop wasn't on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...
(also available for @euripsconf.bsky.social)
#NeurIPS #EurIPS
Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well
18.11.2025 20:17
Sorry, but things get pretty when you leave Python
18.11.2025 19:46
Interpolation between two Gaussian distributions on a flat torus (my personal benchmark for new LLMs)
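For the curious, a minimal sketch of that benchmark as I read it (the details here are my own assumptions): sample two wrapped Gaussians on the flat torus [0, 1)^2 and interpolate along the shortest mod-1 geodesic, so mass crosses the boundary instead of cutting through the interior.

```python
import jax
import jax.numpy as jnp

def wrapped_gaussian(key, mean, std, n):
    # Gaussian samples wrapped onto the flat torus [0, 1)^2.
    return (mean + std * jax.random.normal(key, (n, 2))) % 1.0

def torus_interp(x0, x1, t):
    d = (x1 - x0 + 0.5) % 1.0 - 0.5    # shortest signed displacement on the torus
    return (x0 + t * d) % 1.0

key0, key1 = jax.random.split(jax.random.PRNGKey(0))
x0 = wrapped_gaussian(key0, jnp.array([0.2, 0.2]), 0.05, 1000)
x1 = wrapped_gaussian(key1, jnp.array([0.9, 0.9]), 0.05, 1000)
xt = torus_interp(x0, x1, 0.5)         # points cross the boundary, not the interior
```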
18.11.2025 18:43
It may not top ImageNet benchmarks, but honestly, that hardly matters... the removal of the VAE component is a huge relief and makes it much easier to apply diffusion models to domain-specific datasets that lack large-scale VAEs.
18.11.2025 17:10
"Back to Basics: Let Denoising Generative Models Denoise" by
Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel space, without a VAE, predicting the clean image = nice generation results. Not a new framework, but a nice exploration of the design space of diffusion models.
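A minimal sketch of the clean-image (x-) prediction objective in pixel space, with no VAE; the linear schedule and all names here are my assumptions, not the paper's exact setup.

```python
import jax
import jax.numpy as jnp

def x_prediction_loss(net, params, key, x, t):
    """x: clean images in pixel space; t in (0, 1)."""
    eps = jax.random.normal(key, x.shape)
    z_t = (1.0 - t) * x + t * eps        # noisy input under a linear schedule
    x_hat = net(params, z_t, t)          # the model predicts the clean image directly
    return jnp.mean((x_hat - x) ** 2)    # denoising loss directly on pixels
```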
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
10.11.2025 09:09
It's a >1B model if you unfold the EMA...
13.11.2025 20:11
We created a 1-hour live-coding tutorial to get started with imaging problems with AI, using the deepinverse library
youtu.be/YRJRgmXV8_I?...
I'll be at EurIPS in Copenhagen in early December! Always up for chats about diffusion, flow matching, Earth observation, AI4climate, etc... Ping me if you're going! 🇩🇰
12.11.2025 21:07
I first came across the idea of learning curved interpolants in "Branched Schrödinger Bridge Matching" arxiv.org/abs/2506.09007. I liked it, but I'm curious how well it scales to high-dim settings and how challenging it is to learn sufficiently good interpolants to train the diffusion bridge.
12.11.2025 20:57
"Curly Flow Matching for Learning Non-gradient Field Dynamics" @kpetrovvic.bsky.social et al. arxiv.org/pdf/2510.26645
Solving the Schrödinger bridge problem with a non-zero-drift reference process: learn curved interpolants, apply minibatch OT with the induced metric, learn the mixture of diffusion bridges.
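A hedged sketch of the middle step (minibatch OT under a non-Euclidean cost), using POT; the placeholder squared-Euclidean cost stands in for the metric induced by the learned interpolants, and the helper names are mine.

```python
import jax.numpy as jnp
import numpy as np
import ot  # POT: Python Optimal Transport

def minibatch_ot_pairs(x0, x1, cost_fn=None):
    # Placeholder cost; swap in the interpolant-induced metric here.
    if cost_fn is None:
        cost_fn = lambda a, b: jnp.sum((a[:, None] - b[None]) ** 2, axis=-1)
    C = np.asarray(cost_fn(x0, x1))
    a = np.full(len(x0), 1.0 / len(x0))
    b = np.full(len(x1), 1.0 / len(x1))
    plan = ot.emd(a, b, C)           # exact OT on the minibatch
    # With uniform equal-size marginals the plan is a permutation; recover it.
    j = plan.argmax(axis=1)
    return x0, x1[j]
```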
I'm on my way to @caltech.edu for an AI + Science conference. Looking forward to seeing some friends and meeting new ones. There will be a livestream.
aiscienceconference.caltech.edu
"Entropic (Gromov) Wasserstein Flow Matching with GENOT" by D. Klein et al. arxiv.org/abs/2310.09254
Transport between two distributions defined on different spaces by training a noise-to-data flow model in the target space, conditioned on the source data and leveraging Gromov-Wasserstein couplings.
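A minimal sketch of that recipe as I read it: a standard conditional flow matching loss in the target space, with the coupled source sample fed in as conditioning. The names and the (omitted) Gromov-Wasserstein coupling step are assumptions, not the paper's API.

```python
import jax
import jax.numpy as jnp

def genot_style_loss(net, params, key, x_src, y_tgt, t):
    """y_tgt lives in the target space; x_src is the source sample it was coupled with."""
    noise = jax.random.normal(key, y_tgt.shape)
    y_t = (1.0 - t) * noise + t * y_tgt       # noise-to-data path in the target space
    v_target = y_tgt - noise
    v_pred = net(params, y_t, t, x_src)       # the source sample enters as conditioning
    return jnp.mean((v_pred - v_target) ** 2)
```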
🔥 DeepInverse is now part of the official PyTorch Landscape 🔥
We are excited to join an ecosystem of great open-source AI libraries, including @hf.co diffusers, MONAI, einops, etc.
pytorch.org/blog/deepinv...
New paper on the generation phases of Flow Matching arxiv.org/abs/2510.24830
Are FM & diffusion models nothing more than denoisers at every noise level?
In theory yes, *if trained optimally*. But in practice, do all noise levels matter equally?
with @annegnx.bsky.social, S Martin & R Gribonval
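To make the "denoisers at every noise level" reading concrete: for the linear path x_t = (1 - t) x0 + t x1 with x0 noise and x1 data, the optimal velocity is an affine function of the posterior mean E[x1 | x_t], so velocity and denoiser parameterizations are interchangeable (for t < 1). A small self-contained sketch:

```python
import jax.numpy as jnp

def denoiser_from_velocity(x_t, v, t):
    # v* = (E[x1 | x_t] - x_t) / (1 - t)  =>  E[x1 | x_t] = x_t + (1 - t) * v*
    return x_t + (1.0 - t) * v

def velocity_from_denoiser(x_t, x1_hat, t):
    # Inverse map: recover the velocity from a clean-image prediction (t < 1).
    return (x1_hat - x_t) / (1.0 - t)
```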
Want to work on generative models and Earth observation? 🌍
I'm looking for:
🧑‍💻 an intern on generative models for change detection
🧑‍🔬 a PhD student on neurosymbolic generative models for geospatial data
Both starting in early 2026.
Details are below, feel free to email me!
We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.
- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev
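A hedged sketch of what single-stage reward conditioning could look like, purely my reading of the announcement: quantize the reward and feed it to the model as extra conditioning during flow matching pretraining, then ask for the top reward bin at sampling time. All names are illustrative, and the reward is assumed to be normalized to [0, 1].

```python
import jax
import jax.numpy as jnp

def reward_conditioned_loss(net, params, key, x1, text_emb, reward, t, n_bins=10):
    # Quantize the (normalized) reward into a discrete conditioning token.
    reward_bin = jnp.clip((reward * n_bins).astype(jnp.int32), 0, n_bins - 1)
    x0 = jax.random.normal(key, x1.shape)
    x_t = (1.0 - t) * x0 + t * x1
    v_pred = net(params, x_t, t, text_emb, reward_bin)  # reward joins the conditioning
    return jnp.mean((v_pred - (x1 - x0)) ** 2)

# At sampling time, condition on the top reward bin to steer toward high-reward images.
```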
Great article! But can the preference score go up to 2? You know, because 1 just isn't aesthetic enough.
31.10.2025 09:37
New paper, with @rkhashmani.me @marielpettee.bsky.social @garrettmerz.bsky.social Hellen Qu. We introduce a framework for generating realistic, highly multimodal datasets with explicitly calculable mutual information. This is helpful for studying self-supervised learning.
arxiv.org/abs/2510.21686
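Not the paper's construction, but a toy instance of "explicitly calculable mutual information": two jointly Gaussian views with per-dimension correlation rho have I(X; Y) = -0.5 * log(1 - rho^2) per dimension in closed form, which gives a ground truth to test estimators against.

```python
import jax
import jax.numpy as jnp

def correlated_views(key, rho, n, dim):
    # Two views that are jointly Gaussian with correlation rho in each dimension.
    k1, k2 = jax.random.split(key)
    x = jax.random.normal(k1, (n, dim))
    y = rho * x + jnp.sqrt(1.0 - rho ** 2) * jax.random.normal(k2, (n, dim))
    return x, y

rho = 0.9
mi_per_dim = -0.5 * jnp.log(1.0 - rho ** 2)  # exact MI in nats; total MI = dim * this
```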
Only because of the font size and the large figures!
Tbh, I think this is a super valuable resource for anyone with even a small amount of background in diffusion models or generative models who's interested in diffusion model research (and wants to avoid the early papers' notations)
"The Principles of Diffusion Models" by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon. arxiv.org/abs/2510.21890
It might not be the easiest intro to diffusion models, but this monograph is an amazing deep dive into the math behind them and all the nuances
I'm excited to share jaxion, a differentiable Python/JAX library for fuzzy dark matter (axions) + gas + stars, scalable on multiple GPUs
⭐️ repo: github.com/JaxionProjec...
📖 docs: jaxion.readthedocs.io
Feedback + collaborations welcome!
Fisher meets Feynman: score-based variational inference with a product of experts
Fisher meets Feynman!
We use score matching and a trick from quantum field theory to make a product-of-experts family both expressive and efficient for variational inference.
To appear as a spotlight @ NeurIPS 2025.
#NeurIPS2025 (link below)
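A minimal sketch of the product-of-experts fact this builds on: the score of a product of densities is the sum of the expert scores, which is what makes score matching a natural fit for this family. Names here are illustrative, not the paper's code.

```python
import jax
import jax.numpy as jnp

def poe_score(expert_log_densities, x):
    """q(x) ∝ prod_i q_i(x)  =>  grad log q(x) = sum_i grad log q_i(x)."""
    return sum(jax.grad(logq)(x) for logq in expert_log_densities)

# Example with two 1D Gaussian experts:
logq1 = lambda x: -0.5 * (x - 1.0) ** 2        # N(1, 1)
logq2 = lambda x: -0.5 * (x + 1.0) ** 2 / 4.0  # N(-1, 4)
s = poe_score([logq1, logq2], jnp.array(0.3))
```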
That, and please share/repost the articles you're interested in (especially if you're not the author). If I'm following you, I want to see what you're reading. We don't need a fancy algorithm if we can discover great research through the curated posts of the people we follow.
27.10.2025 13:55