Very cool PhD project on generative models for dense detection of rare events in Earth Observation!
Nicolas has been my supervisor for the last 3 years, highly recommend doing a PhD with him!
Fully funded PhD - Dense Detection of Rare Events in Remote Sensing using Generative Models
Leverage generative models, unsupervised segmentation and explainability techniques to map disasters
w/ @javi-castillo.bsky.social and Flora Weissgerber
Apply:
recrutement.cnes.fr/fr/annonce/4...
Is it a VS Code plugin?
23.01.2026 10:25
Meta Flow Maps enable scalable reward alignment, Peter Potaptchik et al. (arxiv.org/abs/2601.14430)
This article introduces Meta Flow Maps: a stochastic generalization of consistency models (one-step generation) that allows efficient reward steering at inference time or during fine-tuning.
I'm excited to open the new year by sharing a new perspective paper.
I give an informal outline of MD (molecular dynamics) and how it can interact with generative AI. Then, I discuss how far the field has come since the seminal contributions, such as Boltzmann Generators, and what is still missing.
Should we ban Brian Eno from bandcamp?
15.01.2026 17:08
New blog: I reflect on why I worked on what I worked on...
I think a PhD is a very special time. You get to challenge yourself, push your boundaries, and grow. My thoughts go against the current AI/academia narrative online, so I hope you find it interesting.
chaitjo.substack.com/p/phd-thesis...
Yes, estimating a distance between distributions from a single sample sounds ill-posed. I wonder if flow-based artefacts are sufficiently similar across models with the same FID, allowing us to learn the score-predictive model. I may try later!
08.01.2026 17:56
Agree! I wonder if some generation artefacts are signatures that allow predicting the FID score (assuming they are present in almost all images generated by a given model)
08.01.2026 17:41
You may add the real test (or training) dataset if you are into leaderboard chasing
08.01.2026 17:33
1. Select many diffusion/flow-matching models
2. Generate 50k images per model
3. Use FID of each set as a label
4. Train a model to predict FID from a single image
What's the probability this actually works, gives a cheap proxy for FID, and enables fast generative model prototyping?
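The four steps above can be sketched end-to-end on synthetic data. Everything here is an illustrative assumption (a toy linear "artefact" per model, ridge regression instead of a deep net, random vectors instead of images), not a claim about the real experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for steps 1-2: each "model" m has a true FID and leaves a
# model-specific artefact direction in its generated "images".
n_models, imgs_per_model, dim = 8, 200, 32
fids = rng.uniform(5.0, 60.0, size=n_models)        # step 3: one FID label per model
signatures = rng.normal(size=(n_models, dim))       # hypothetical artefact per model

X_parts, y_parts = [], []
for m in range(n_models):
    imgs = rng.normal(size=(imgs_per_model, dim))   # "generated images"
    imgs += 0.05 * fids[m] * signatures[m]          # artefact strength grows with FID
    X_parts.append(imgs)
    y_parts.append(np.full(imgs_per_model, fids[m]))
X, y = np.concatenate(X_parts), np.concatenate(y_parts)

# Step 4: ridge regression predicts the set-level FID from a single image.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)
pred = X @ w

# Sanity check: per-model mean prediction should track the true FID.
per_model_pred = pred.reshape(n_models, imgs_per_model).mean(axis=1)
print(np.corrcoef(per_model_pred, fids)[0, 1])
```

If the artefact assumption fails (no model-specific signature survives in a single sample), the correlation collapses toward zero, which is exactly the failure mode this bet is against.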
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
07.01.2026 17:27
With Mike Davies, we put together a review of self-supervised learning for inverse problems, covering the main approaches in the literature with a unified notation and analysis.
arxiv.org/abs/2601.03244
Can we train neural networks just with permutations of their initial weights? And then, what's the best initialisation distribution?
10.12.2025 17:21
PS: We also recently released a unified codebase for discrete diffusion, check it out!
Thread: x.com/nkalyanv99/...
GitHub: github.com/nkalyanv99/...
Docs: nkalyanv99.github.io/UNI-D2/
"Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction"
Huge thanks to Tobias Hoppe, @k-neklyudov.bsky.social,
@alextong.bsky.social, Stefan Bauer and @andreadittadi.bsky.social for their supervision!
arXiv: arxiv.org/abs/2512.05092
I use drum mic kits for punchier presentations
08.12.2025 08:53
Then I'm counting on the sound engineer to engage
07.12.2025 21:23
"Improved Mean Flows: On the Challenges of Fastforward Generative Models" arxiv.org/abs/2512.02012 questions this approximation and proposes a new training process for mean flows.
03.12.2025 21:32
Hi @google, can you provide 100k TPU hours to explore the design space of diffusion bridges for image-to-image translation? x1 vs drift prediction, architectures and # params, # dataset, scaling couplings and batch sizes (for minibatch-based couplings). I can run everything in JAX in return...
03.12.2025 21:19
Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration w/ @vickykalogeiton.bsky.social (& @loicland.bsky.social)
Video: youtube.com/live/DXQ7FZA...
Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras
@climateainordics.com is now on YouTube! Check out some amazing talks on how to help fight climate change using AI!
youtube.com/@climateaino...
@neuripsconf.bsky.social is two weeks away!
Stop missing great workshop speakers just because the workshop wasn't on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...
(also available for @euripsconf.bsky.social)
#NeurIPS #EurIPS
Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well.
18.11.2025 20:17
Sorry, but it's getting pretty when leaving Python
18.11.2025 19:46
Interpolation between two Gaussian distributions on a flat torus (my personal benchmark for new LLMs)
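For reference, a minimal version of that benchmark task: wrap the samples onto the torus, then interpolate along the geodesic using the shortest signed angular difference per coordinate. The means and scales below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def wrap(x):
    """Map points onto the flat torus [0, 2*pi)^2."""
    return np.mod(x, 2 * np.pi)

def torus_interp(x0, x1, t):
    """Interpolate along the torus geodesic: take the shortest signed
    angular difference per coordinate, then move a fraction t along it."""
    d = np.mod(x1 - x0 + np.pi, 2 * np.pi) - np.pi
    return wrap(x0 + t * d)

# Two wrapped Gaussians with arbitrary means and scales.
x0 = wrap(rng.normal(loc=1.0, scale=0.3, size=(1000, 2)))
x1 = wrap(rng.normal(loc=5.0, scale=0.5, size=(1000, 2)))
mid = torus_interp(x0, x1, 0.5)
```

The two subtleties are the wrapping and the shortest-path difference: naive linear interpolation in the plane takes the long way around whenever the two modes sit on opposite sides of the seam.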
18.11.2025 18:43
It may not top ImageNet benchmarks, but honestly, that hardly matters... the removal of the VAE component is a huge relief and makes it much easier to apply diffusion models to domain-specific datasets that lack large-scale VAEs.
18.11.2025 17:10
"Back to Basics: Let Denoising Generative Models Denoise" by
Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel space, without a VAE, with clean-image prediction = nice generation results. Not a new framework, but a nice exploration of the design space of diffusion models.
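A minimal sketch of the clean-image ("x-prediction") objective, with an illustrative cosine schedule and a dummy model; this is not the paper's architecture or exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

def x_prediction_loss(model, x0, t):
    """Corrupt x0 with the forward process, then regress the model output
    onto the clean image x0 itself (rather than onto the noise).
    The cosine schedule is an illustrative variance-preserving choice."""
    alpha = np.cos(0.5 * np.pi * t)   # signal scale
    sigma = np.sin(0.5 * np.pi * t)   # noise scale
    eps = rng.normal(size=x0.shape)
    x_t = alpha * x0 + sigma * eps
    return np.mean((model(x_t, t) - x0) ** 2)

def identity_model(x_t, t):
    # Dummy "network": returns its input unchanged.
    return x_t

x0 = rng.normal(size=(64, 8, 8))      # stand-in for pixel-space images
loss_early = x_prediction_loss(identity_model, x0, t=0.1)
loss_late = x_prediction_loss(identity_model, x0, t=0.9)
```

Even the dummy model shows the shape of the objective: near t = 0 the corrupted input is close to x0 and the loss is small; near t = 1 the clean image must be recovered from almost pure noise.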
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
10.11.2025 09:09
It's a >1B model if you unfold the EMA...
13.11.2025 20:11