
lebellig

@lebellig.bsky.social

Ph.D. student on generative models and domain adaptation for Earth observation 🛰️ Previously intern @SonyCSL, @Ircam, @Inria 🌎 Personal website: https://lebellig.github.io/

2,292 Followers  |  672 Following  |  148 Posts  |  Joined: 08.12.2023

Latest posts by lebellig.bsky.social on Bluesky

Very cool PhD project on generative models for dense detection of rare events in Earth Observation 🌍🌱

Nicolas has been my supervisor for the last 3 years, highly recommend doing a PhD with him!

03.02.2026 14:08 · 👍 2   🔁 0   💬 0   📌 0
26-252 Dense Detection of Rare Events in Remote Sensing Using Generative Models: job offer at CNES, 75003 Paris!

📢 Fully funded PhD - 🌍 Dense Detection of Rare Events in Remote Sensing using Generative Models

Leverage generative models, unsupervised segmentation and explainability techniques to map disasters

w/ @javi-castillo.bsky.social and Flora Weissgerber

Apply ⤵️
recrutement.cnes.fr/fr/annonce/4...

03.02.2026 12:51 · 👍 8   🔁 8   💬 1   📌 1

Is it a VS Code plugin?

23.01.2026 10:25 · 👍 0   🔁 0   💬 1   📌 0

Meta Flow Maps enable scalable reward alignment, Peter Potaptchik et al. (arxiv.org/abs/2601.14430)

This article introduces Meta Flow Maps: a stochastic generalization of consistency models (one-step generation) that allows efficient reward steering at inference time or during fine-tuning.

22.01.2026 09:15 · 👍 5   🔁 0   💬 0   📌 0
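(To make "reward steering at inference time" concrete, here is a naive best-of-n baseline, not the Meta Flow Maps method itself; `one_step_generator` and `reward` are hypothetical stand-ins.)

```python
import torch

def one_step_generator(z):
    # Hypothetical stand-in for a one-step (consistency / flow-map style) generator.
    return torch.tanh(z)

def reward(x):
    # Hypothetical reward model, e.g. an aesthetic or constraint score per sample.
    return -(x ** 2).flatten(1).mean(dim=1)

def best_of_n(n=16, shape=(3, 32, 32)):
    z = torch.randn(n, *shape)
    samples = one_step_generator(z)           # one network call per candidate
    return samples[reward(samples).argmax()]  # keep the highest-reward candidate

x = best_of_n()
```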

I'm excited to open the new year by sharing a new perspective paper.

I give an informal outline of molecular dynamics (MD) and how it can interact with Generative AI. Then, I discuss how far the field has come since seminal contributions such as Boltzmann Generators, and what is still missing.

16.01.2026 10:25 · 👍 19   🔁 5   💬 1   📌 1

Should we ban Brian Eno from Bandcamp?

15.01.2026 17:08 · 👍 4   🔁 0   💬 0   📌 0
A Cambridge PhD thesis in three research questions ("Geometric Deep Learning for Molecular Modelling and Design: A personal scientific journey")

New blog 💙: I reflect on why I worked on what I worked on...

I think a PhD is a very special time. You get to challenge yourself, push your boundaries, and grow. My thoughts go against the current AI/academia narrative online, so I hope you find it interesting.

chaitjo.substack.com/p/phd-thesis...

08.01.2026 04:38 · 👍 10   🔁 2   💬 1   📌 0

Yes, estimating a distance between distributions from a single sample sounds ill-posed. I wonder if flow-based artefacts are sufficiently similar across models with the same FID, allowing us to learn the score-predictive model. I may try it later!

08.01.2026 17:56 · 👍 3   🔁 0   💬 0   📌 0

Agree! I wonder if some generation artefacts are signatures that allow predicting the FID score (assuming they are present in almost all images generated by a given model).

08.01.2026 17:41 · 👍 0   🔁 0   💬 0   📌 0

You may add the real test (or training 👀) dataset if you are into leaderboard chasing.

08.01.2026 17:33 · 👍 0   🔁 0   💬 0   📌 0

1. Select many diffusion/flow-matching models
2. Generate 50k images per model
3. Use FID of each set as a label
4. Train a model to predict FID from a single image

What's the probability this actually works, gives a cheap proxy for FID, and enables fast generative-model prototyping?

08.01.2026 15:17 · 👍 2   🔁 0   💬 2   📌 1
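(A minimal sketch of the four-step recipe above, assuming the generated images are already on disk, one folder per model, each with at least one class subdirectory as torchvision's ImageFolder expects, and the per-model FIDs are precomputed. The folder names and `fid_per_model` values are placeholders.)

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Step 3: hypothetical precomputed FIDs; keys match folders under ./generated/
fid_per_model = {"modelA": 12.3, "modelB": 25.7}

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
loaders = {
    name: DataLoader(datasets.ImageFolder(f"./generated/{name}", transform=tfm),
                     batch_size=64, shuffle=True)
    for name in fid_per_model
}

# Step 4: regress FID from a single image with a standard backbone.
regressor = models.resnet18(weights=None)
regressor.fc = nn.Linear(regressor.fc.in_features, 1)
opt = torch.optim.Adam(regressor.parameters(), lr=1e-4)

for epoch in range(10):
    for name, loader in loaders.items():
        for images, _ in loader:
            pred = regressor(images).squeeze(1)
            target = torch.full_like(pred, fid_per_model[name])   # every image inherits its model's FID
            loss = nn.functional.mse_loss(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

At prototyping time, averaging the per-image predictions over a handful of generated samples would then give the cheap FID proxy.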

We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7

07.01.2026 17:27 · 👍 143   🔁 34   💬 9   📌 9
Self-Supervised Learning from Noisy and Incomplete Data: Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tac...

📖 Together with Mike Davies, we put together a review of self-supervised learning for inverse problems, covering the main approaches in the literature with a unified notation and analysis.

arxiv.org/abs/2601.03244

08.01.2026 12:37 · 👍 9   🔁 4   💬 1   📌 0
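(Illustrative only, not a recipe from the review: a minimal sketch of the setting it covers, y = A(x) + noise with A known and no ground-truth x, trained with a measurement-consistency term plus an equivariance term in the spirit of equivariant-imaging-style approaches. The inpainting operator, circular-shift transform, and tiny CNN are placeholder choices.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
mask = (torch.rand(1, 1, 32, 32) > 0.3).float()      # known forward operator: random pixel mask
A = lambda x: mask * x

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))   # placeholder reconstruction network
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def random_shift(x):
    # Transform for the equivariance term: random circular shift along the width.
    return torch.roll(x, shifts=int(torch.randint(0, 32, (1,))), dims=-1)

for step in range(1000):
    # Stand-in measurements; in practice y would come from real observed data.
    y = A(torch.rand(16, 1, 32, 32)) + 0.05 * torch.randn(16, 1, 32, 32)
    x1 = net(y)                                        # reconstruction from measurements only
    mc_loss = ((A(x1) - y) ** 2).mean()                # measurement consistency
    x2 = random_shift(x1)                              # transformed "virtual" signal
    eq_loss = ((net(A(x2)) - x2) ** 2).mean()          # reconstruction should commute with the shift
    loss = mc_loss + eq_loss
    opt.zero_grad(); loss.backward(); opt.step()
```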

Can we train neural networks just with permutations of their initial weights? And then, what's the best initialisation distribution?

10.12.2025 17:21 · 👍 5   🔁 0   💬 1   📌 0
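(One hedged way to probe that question experimentally: freeze the initial weights and learn only a soft permutation of them via Sinkhorn normalization, so the re-arrangement stays differentiable. `PermutedLinear` and the toy task below are made up for illustration and only work for tiny layers, since the permutation logits scale quadratically with the number of weights.)

```python
import torch
import torch.nn as nn

def sinkhorn(logits, n_iters=20):
    # Approximate a doubly-stochastic (soft permutation) matrix from logits.
    log_p = logits
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # row normalize
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # column normalize
    return log_p.exp()

class PermutedLinear(nn.Module):
    """Linear layer whose weights are a (soft) permutation of frozen initial weights."""
    def __init__(self, in_features, out_features):
        super().__init__()
        w0 = torch.randn(out_features, in_features) / in_features ** 0.5
        self.register_buffer("w0_flat", w0.flatten())   # frozen initial weights
        n = w0.numel()
        self.logits = nn.Parameter(torch.zeros(n, n))   # only the permutation is learned
        self.shape = (out_features, in_features)

    def forward(self, x):
        P = sinkhorn(self.logits)
        w = (P @ self.w0_flat).view(self.shape)         # re-arranged initial weights
        return x @ w.t()

# Toy usage: only the permutation logits are optimized; w0 buffers stay frozen.
net = nn.Sequential(PermutedLinear(2, 16), nn.ReLU(), PermutedLinear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x = torch.randn(256, 2)
y = (x[:, :1] * x[:, 1:] > 0).float()                   # simple XOR-like target
for _ in range(200):
    loss = nn.functional.binary_cross_entropy_with_logits(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```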

PS: We also recently released a unified codebase for discrete diffusion; check it out!

𝕏 Thread: x.com/nkalyanv99/...
🔗 GitHub: github.com/nkalyanv99/...
📚 Docs: nkalyanv99.github.io/UNI-D2/

09.12.2025 16:06 · 👍 2   🔁 1   💬 0   📌 0

🆕 “Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction”

Huge thanks to Tobias Hoppe, @k-neklyudov.bsky.social,
@alextong.bsky.social, Stefan Bauer and @andreadittadi.bsky.social for their supervision! 🙌

arXiv: arxiv.org/abs/2512.05092 🧵👇

09.12.2025 16:05 · 👍 11   🔁 4   💬 1   📌 0

I use drum mic kits for punchier presentations

08.12.2025 08:53 · 👍 1   🔁 0   💬 0   📌 0

Then I'm counting on the sound engineer to engage

07.12.2025 21:23 · 👍 2   🔁 0   💬 1   📌 0

"Improved Mean Flows: On the Challenges of Fastforward Generative Models" arxiv.org/abs/2512.02012 questions this approximation and proposes a new training process for mean flows.

03.12.2025 21:32 · 👍 3   🔁 0   💬 0   📌 0

Hi @google, can you provide 100k TPU hours to explore the design space of diffusion bridges for image-to-image translation? x1 vs drift prediction, architectures and # params, dataset size, scaling couplings and batch sizes (for minibatch-based couplings). I can run everything in JAX in return...

03.12.2025 21:19 · 👍 3   🔁 0   💬 0   📌 0

Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration w/ @vickykalogeiton.bsky.social (& @loicland.bsky.social)

Video: youtube.com/live/DXQ7FZA...

Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras

27.11.2025 19:14 · 👍 28   🔁 3   💬 1   📌 1

@climateainordics.com is now on YouTube! Check out some amazing talks on how to help fight climate change using AI!

youtube.com/@climateaino...

26.11.2025 14:06 · 👍 7   🔁 3   💬 0   📌 0

@neuripsconf.bsky.social is two weeks away!

📢 Stop missing great workshop speakers just because the workshop wasn't on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...

(also available for @euripsconf.bsky.social)

#NeurIPS #EurIPS

19.11.2025 20:00 · 👍 9   🔁 5   💬 1   📌 1

Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well.

18.11.2025 20:17 · 👍 4   🔁 0   💬 0   📌 0

Sorry, but it gets pretty once you leave Python

18.11.2025 19:46 · 👍 0   🔁 0   💬 0   📌 0

Interpolation between two Gaussian distributions on a flat torus (my personal benchmark for new LLMs)

18.11.2025 18:43 · 👍 5   🔁 0   💬 1   📌 1
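(A toy version of that benchmark, under the simplifying assumptions that "interpolation" means moving samples along shortest arcs in each angular coordinate and that source/target samples are paired arbitrarily rather than through an optimal-transport coupling.)

```python
import numpy as np

rng = np.random.default_rng(0)
TWO_PI = 2 * np.pi

def wrapped_gaussian(mean, std, n):
    # Sample angles from a Gaussian and wrap them onto [0, 2*pi) x [0, 2*pi).
    return rng.normal(mean, std, size=(n, 2)) % TWO_PI

def torus_interp(theta0, theta1, t):
    # Move each sample a fraction t along the shortest arc in each angular coordinate.
    delta = (theta1 - theta0 + np.pi) % TWO_PI - np.pi   # signed geodesic difference
    return (theta0 + t * delta) % TWO_PI

src = wrapped_gaussian(mean=[0.5, 0.5], std=0.3, n=2000)
tgt = wrapped_gaussian(mean=[5.5, 3.0], std=0.5, n=2000)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    pts = torus_interp(src, tgt, t)   # 2000 points on the torus at time t
    print(t, pts[:2])                 # e.g. feed these frames into a scatter animation
```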

It may not top ImageNet benchmarks, but honestly, that hardly matters... the removal of the VAE component is a huge relief and makes it much easier to apply diffusion models to domain-specific datasets that lack large-scale VAEs.

18.11.2025 17:10 · 👍 7   🔁 0   💬 0   📌 0

"Back to Basics: Let Denoising Generative Models Denoise" by
Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel space, without a VAE, with clean-image prediction = nice generation results. Not a new framework, but a nice exploration of the design space of diffusion models.

18.11.2025 17:05 · 👍 14   🔁 2   💬 0   📌 1
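(A hedged sketch of the clean-image-prediction objective discussed above, not the paper's exact recipe: corrupt pixels along a simple noising path and regress the clean image directly. The tiny CNN stands in for the real architecture, and time conditioning is omitted for brevity.)

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 3, 3, padding=1))   # stand-in for a pixel-space UNet/ViT
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def training_step(x0):
    # x0: batch of clean images in [-1, 1], shape (B, 3, H, W)
    t = torch.rand(x0.shape[0], 1, 1, 1)              # random noise level per image
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise                    # noisy input along a linear path
    x0_pred = net(x_t)                                # predict the clean image, not the noise
    loss = ((x0_pred - x0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage with random stand-in data:
print(training_step(torch.rand(8, 3, 32, 32) * 2 - 1))
```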

We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.

10.11.2025 09:09 · 👍 24   🔁 12   💬 4   📌 2

It's a >1B model if you unfold the EMA...

13.11.2025 20:11 · 👍 3   🔁 0   💬 0   📌 0
