
lebellig

@lebellig.bsky.social

Ph.D. student on generative models and domain adaptation for Earth observation 🛰️ Previously intern @SonyCSL, @Ircam, @Inria 🌎 Personal website: https://lebellig.github.io/

2,284 Followers  |  647 Following  |  137 Posts  |  Joined: 08.12.2023

Latest posts by lebellig.bsky.social on Bluesky

"Improved Mean Flows: On the Challenges of Fastforward Generative Models" arxiv.org/abs/2512.02012 questions this approximation and proposes a new training process for mean flows.

03.12.2025 21:32 · 👍 2  🔁 0  💬 0  📌 0
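
For readers who want the object being questioned: mean-flow models regress an average velocity over an interval rather than the instantaneous one. A sketch in the notation of the original MeanFlow paper (my summary, not this paper's):

```latex
% Average velocity of the flow z_\tau over [r, t], with instantaneous velocity v:
u(z_t, r, t) = \frac{1}{t - r} \int_r^t v(z_\tau, \tau) \, d\tau
% Self-consistency identity used to build the training target
% (d/dt is the total derivative along the flow):
u(z_t, r, t) = v(z_t, t) - (t - r) \frac{d}{dt} u(z_t, r, t)
```

In training, the right-hand side is typically evaluated with a stop-gradient, presumably among the approximations the linked paper revisits.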

Hi @ google, can you provide 100k TPU hours to explore the design space of diffusion bridges for image-to-image translation? x1- vs drift-prediction, architectures and parameter counts, dataset size, and scaling couplings and batch sizes (for minibatch-based couplings). I can run everything in JAX in return...

03.12.2025 21:19 · 👍 3  🔁 0  💬 0  📌 0

Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration w/ @vickykalogeiton.bsky.social (& @loicland.bsky.social)

Video: youtube.com/live/DXQ7FZA...

Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras

27.11.2025 19:14 · 👍 28  🔁 3  💬 1  📌 1

@climateainordics.com is now on youtube! Check out some amazing talks on how to help fight climate change using AI!

youtube.com/@climateaino...

26.11.2025 14:06 · 👍 7  🔁 3  💬 0  📌 0

@neuripsconf.bsky.social is two weeks away!

📢 Stop missing great workshop speakers just because the workshop wasn't on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...

(also available for @euripsconf.bsky.social)

#NeurIPS #EurIPS

19.11.2025 20:00 · 👍 9  🔁 5  💬 1  📌 1

Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well

18.11.2025 20:17 · 👍 4  🔁 0  💬 0  📌 0

Sorry, but it gets pretty once you leave Python

18.11.2025 19:46 · 👍 0  🔁 0  💬 0  📌 0

Interpolation between two Gaussian distributions on a flat torus (my personal benchmark for new LLMs)

18.11.2025 18:43 · 👍 5  🔁 0  💬 1  📌 1
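
The benchmark is easy to state but easy to get wrong. A minimal NumPy sketch of the key step (my own toy version; `torus_interp` is a hypothetical helper, and real wrapped-Gaussian sampling is more involved than the clipped-noise clusters used here):

```python
import numpy as np

def torus_interp(x0, x1, t):
    """Interpolate on the flat torus [0, 1)^d along the shortest path.

    Each displacement is wrapped to [-0.5, 0.5) so the path takes the
    short way around each circle instead of crossing the whole domain.
    """
    d = (x1 - x0 + 0.5) % 1.0 - 0.5  # shortest signed displacement per axis
    return (x0 + t * d) % 1.0

# Two tight clusters (stand-ins for wrapped Gaussians) near opposite corners
rng = np.random.default_rng(0)
a = (0.05 + 0.02 * rng.standard_normal((256, 2))) % 1.0
b = (0.95 + 0.02 * rng.standard_normal((256, 2))) % 1.0
mid = torus_interp(a, b, 0.5)  # halfway points, wrapping through the corner
```

The wrap in `d` is the whole point: a naive `(1 - t) * x0 + t * x1` would drag mass across the entire domain instead of through the nearby boundary.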

It may not top ImageNet benchmarks, but honestly, that hardly matters... the removal of the VAE component is a huge relief and makes it much easier to apply diffusion models to domain-specific datasets that lack large-scale VAEs.

18.11.2025 17:10 · 👍 7  🔁 0  💬 0  📌 0

"Back to Basics: Let Denoising Generative Models Denoise" by
Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel space, without a VAE, with clean-image prediction = nice generation results. Not a new framework, but a nice exploration of the design space of diffusion models.

18.11.2025 17:05 · 👍 14  🔁 2  💬 0  📌 1
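
A minimal sketch of the clean-image ("x-prediction") objective on a linear noising path (toy NumPy; not the paper's pixel-space architecture or exact noise schedule, and `x_prediction_loss` is a hypothetical name):

```python
import numpy as np

rng = np.random.default_rng(0)

def x_prediction_loss(model, x0, t):
    """Clean-data ("x-prediction") diffusion loss on a linear noising path.

    The corrupted sample is x_t = (1 - t) * x0 + t * eps, and the network
    is asked to output the clean x0 directly; the loss is a plain MSE.
    """
    eps = rng.standard_normal(x0.shape)
    x_t = (1.0 - t)[:, None] * x0 + t[:, None] * eps
    return np.mean((model(x_t, t) - x0) ** 2)

# A trivial "model" that returns its input, just to exercise the function
identity = lambda x_t, t: x_t
x0 = rng.standard_normal((8, 16))   # a batch of flattened "images"
t = rng.uniform(size=8)             # one noise level per sample
loss = x_prediction_loss(identity, x0, t)
```

The appeal for domain-specific data is visible even in the sketch: nothing here needs a pretrained latent encoder, only the raw samples.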

We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.

10.11.2025 09:09 · 👍 24  🔁 12  💬 4  📌 2

It's a >1B model if you unfold the EMA...

13.11.2025 20:11 · 👍 3  🔁 0  💬 0  📌 0
DeepInverse tutorial - computational imaging with AI

We created a 1-hour live-coding tutorial to get started in imaging problems with AI, using the deepinverse library

youtu.be/YRJRgmXV8_I?...

13.11.2025 15:24 · 👍 3  🔁 1  💬 0  📌 0
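
For readers new to the topic, the kind of problem the tutorial targets fits in a few lines of NumPy. This is a generic Tikhonov-regularized least-squares sketch, not the deepinverse API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A @ x + noise (stand-in for blurring,
# inpainting, undersampled MRI, etc.)
n, m = 32, 48                            # fewer measurements than unknowns
A = rng.standard_normal((n, m))          # forward operator
x_true = rng.standard_normal(m)
y = A @ x_true + 0.01 * rng.standard_normal(n)

# Tikhonov-regularized reconstruction: argmin_x ||A x - y||^2 + lam * ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
residual = np.linalg.norm(A @ x_hat - y)
```

Learned approaches replace the hand-picked `||x||^2` prior with a trained denoiser or generative model; the forward-operator structure stays the same.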

I'll be at EurIPS in Copenhagen in early December! Always up for chats about diffusion, flow matching, Earth observation, AI4climate, etc... Ping me if you're going! 🇩🇰🌍

12.11.2025 21:07 · 👍 8  🔁 0  💬 0  📌 0

I first came across the idea of learning curved interpolants in "Branched Schrödinger Bridge Matching" arxiv.org/abs/2506.09007. I liked it, but I'm curious how well it scales to high-dim settings and how challenging it is to learn sufficiently good interpolants to train the diffusion bridge

12.11.2025 20:57 · 👍 3  🔁 0  💬 0  📌 0

"Curly Flow Matching for Learning Non-gradient Field Dynamics" @kpetrovvic.bsky.social et al. arxiv.org/pdf/2510.26645
Solving the Schrödinger bridge problem with a non-zero-drift reference process: learn curved interpolants, apply minibatch OT with the induced metric, then learn the mixture of diffusion bridges.

12.11.2025 20:09 · 👍 4  🔁 1  💬 0  📌 1
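
The minibatch-OT step in isolation, with the plain Euclidean metric (the paper's point is to replace this metric with one induced by the learned curved interpolants), can be brute-forced for tiny batches. `minibatch_ot_pairing` is a hypothetical helper name:

```python
import itertools
import numpy as np

def minibatch_ot_pairing(x0, x1):
    """Brute-force optimal pairing of two small minibatches.

    Minimizes the summed squared Euclidean distance over all permutations;
    real implementations use a Hungarian or Sinkhorn solver instead.
    """
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)  # pairwise costs
    n = len(x0)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return np.array(best)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((5, 2))
pairing = minibatch_ot_pairing(x0, x0 + 0.01)  # near-identical batches
```

Swapping the squared-distance `cost` for a geodesic cost under a learned metric is exactly where the curved interpolants would plug in.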

I'm on my way to @caltech.edu for an AI + Science conference. Looking forward to seeing some friends and meeting new ones. There will be a livestream.
aiscienceconference.caltech.edu

09.11.2025 20:41 · 👍 8  🔁 1  💬 1  📌 1

“Entropic (Gromov) Wasserstein Flow Matching with GENOT” by D. Klein et al. arxiv.org/abs/2310.09254
Transport between two distributions defined on different spaces by training a noise-to-data flow model in the target space, conditioned on the source data and leveraging Gromov–Wasserstein couplings

30.10.2025 22:43 · 👍 3  🔁 1  💬 0  📌 0
DeepInverse Joins the PyTorch Ecosystem: the library for solving imaging inverse problems with deep learning โ€“ PyTorch

💥 DeepInverse is now part of the official PyTorch Landscape 💥

We are excited to join an ecosystem of great open-source AI libraries, including @hf.co diffusers, MONAI, einops, etc.

pytorch.org/blog/deepinv...

05.11.2025 17:31 · 👍 10  🔁 5  💬 1  📌 0

🌀🌀🌀 New paper on the generation phases of Flow Matching arxiv.org/abs/2510.24830
Are FM & diffusion models nothing else than denoisers at every noise level?
In theory yes, *if trained optimally*. But in practice, do all noise levels equally matter?

with @annegnx.bsky.social, S Martin & R Gribonval

05.11.2025 09:03 · 👍 20  🔁 4  💬 1  📌 1
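
The "denoisers at every noise level" reading refers to the conditional flow matching objective. A toy NumPy sketch of that loss on the linear path (my simplification, not the paper's setup; `flow_matching_loss` is a hypothetical name):

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x1):
    """Conditional flow matching on the linear path x_t = (1 - t) x0 + t x1.

    The regression target is the constant conditional velocity x1 - x0;
    at the optimum the model is an affine function of the posterior-mean
    denoiser at each level t, which is what licenses reading FM models
    as denoisers at every noise level.
    """
    x0 = rng.standard_normal(x1.shape)     # noise endpoint of the path
    t = rng.uniform(size=(len(x1), 1))     # one noise level per sample
    x_t = (1.0 - t) * x0 + t * x1
    return np.mean((model(x_t, t) - (x1 - x0)) ** 2)

zero_model = lambda x_t, t: np.zeros_like(x_t)  # deliberately bad baseline
x1 = rng.standard_normal((16, 4))
loss = flow_matching_loss(zero_model, x1)
```

The paper's question lives in the uniform `t` draw: whether all levels deserve equal weight in practice, not whether the identity holds at the optimum.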

Want to work on generative models and Earth Observation? 🌍

I'm looking for:
๐Ÿง‘โ€๐Ÿ’ป an intern on generative models for change detection
๐Ÿง‘โ€๐Ÿ”ฌ a PhD student on neurosymbolic generative models for geospatial data

Both start at the beginning of 2026.

Details are below, feel free to email me!

04.11.2025 10:08 · 👍 5  🔁 4  💬 1  📌 0

We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉

31.10.2025 11:24 · 👍 60  🔁 14  💬 3  📌 5

Great article! But can the preference score go up to 2? You know, because 1 just isn't aesthetic enough.

31.10.2025 09:37 · 👍 3  🔁 0  💬 0  📌 0

New paper, with @rkhashmani.me @marielpettee.bsky.social @garrettmerz.bsky.social Hellen Qu. We introduce a framework for generating realistic, highly multimodal datasets with explicitly calculable mutual information. This is helpful for studying self-supervised learning.
arxiv.org/abs/2510.21686

28.10.2025 17:23 · 👍 35  🔁 8  💬 1  📌 3
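
The simplest instance of "explicitly calculable mutual information" is the bivariate Gaussian case; the paper's multimodal construction is far richer, but this shows the flavor (`gaussian_mi` is a hypothetical helper):

```python
import numpy as np

def gaussian_mi(rho):
    """Mutual information (in nats) between the two coordinates of a
    bivariate Gaussian with correlation rho: I(X; Y) = -0.5 * log(1 - rho^2)."""
    return -0.5 * np.log(1.0 - rho ** 2)

# Sample such a pair and check the empirical correlation matches the target,
# so the sampled dataset really carries the MI we computed analytically
rng = np.random.default_rng(0)
rho = 0.8
x = rng.standard_normal(100_000)
y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(100_000)
empirical_rho = np.corrcoef(x, y)[0, 1]
```

Having the ground-truth MI in closed form is what makes such datasets useful probes for self-supervised objectives: the estimator can be scored against a known answer.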

Only because of the font size and the large figures!

Tbh, I think this is a super valuable resource for anyone with even a small amount of background in diffusion models or generative models who's interested in diffusion model research (and wants to avoid the early papers' notations)

28.10.2025 14:13 · 👍 2  🔁 0  💬 0  📌 0

"The Principles of Diffusion Models" by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon. arxiv.org/abs/2510.21890
It might not be the easiest intro to diffusion models, but this monograph is an amazing deep dive into the math behind them and all the nuances.

28.10.2025 08:35 · 👍 36  🔁 12  💬 1  📌 1

I'm excited to share jaxion, a differentiable Python/JAX library for fuzzy dark matter (axions) + gas + stars, scalable on multiple GPUs

โญ๏ธrepo: github.com/JaxionProjec...
๐Ÿ“šdocs: jaxion.readthedocs.io

Feedback + collaborations welcome!

27.10.2025 18:10 · 👍 5  🔁 2  💬 0  📌 0
Fisher meets Feynman: score-based variational inference with a product of experts


Fisher meets Feynman! 🤝

We use score matching and a trick from quantum field theory to make a product-of-experts family both expressive and efficient for variational inference.

To appear as a spotlight @ NeurIPS 2025.
#NeurIPS2025 (link below)

27.10.2025 12:51 · 👍 43  🔁 9  💬 1  📌 1
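
Not the paper's score-matching construction, but the closed-form Gaussian case shows why products of experts can stay tractable: precisions add and means average (`product_of_gaussian_experts` is a hypothetical helper):

```python
import numpy as np

def product_of_gaussian_experts(mus, sigmas):
    """Normalized product of 1-D Gaussian expert densities.

    Precisions (inverse variances) add up, and the product mean is the
    precision-weighted average of the expert means; each expert can veto
    regions the others like, which is the expressive part of PoE.
    """
    prec = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mu = (prec * np.asarray(mus, dtype=float)).sum() / prec.sum()
    return mu, float(np.sqrt(1.0 / prec.sum()))

# Two unit-variance experts at 0 and 2: product sits in between, and is
# narrower than either expert alone
mu, sigma = product_of_gaussian_experts([0.0, 2.0], [1.0, 1.0])
```

Beyond the Gaussian case the normalizing constant of a product is generally intractable, which is where score-based training (which never needs that constant) earns its keep.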

That, and please share/repost the articles you're interested in (especially if you're not the author). If I'm following you, I want to see what you're reading. We don't need a fancy algorithm if we can discover great research through the curated posts of the people we follow

27.10.2025 13:55 · 👍 4  🔁 0  💬 0  📌 0
