@mathurinmassias.bsky.social
Tenured Researcher @INRIA, Ockham team. Teacher @Polytechnique and @ENSdeLyon. Machine Learning, Python and Optimization.
We dig into this equivalence in our latest preprint with @annegnx.bsky.social! arxiv.org/abs/2510.24830
30.10.2025 07:24
Strong afternoon session: Ségolène Martin on how to go from flow matching to denoisers (and hopefully come back?), and Claire Boyer on how learning rate and working in latent spaces affect diffusion models
24.10.2025 15:03
Followed by Scott Pesme on how to use diffusion/flow-matching-based MMSE to compute a MAP (and nice examples!), and Thibaut Issenhuth on new ways to learn consistency models
Next is @annegnx.bsky.social presenting our NeurIPS paper on why flow matching generalizes, while it shouldn't!
arxiv.org/abs/2506.03719
Kickstarting our workshop on Flow matching and Diffusion with a talk by Eric Vanden Eijnden on how to optimize learning and sampling in Stochastic Interpolants!
Broadcast available at gdr-iasis.cnrs.fr/reunions/mod...
My paper on Generalized Gradient Norm Clipping & Non-Euclidean (L0, L1)-Smoothness (together with collaborators from EPFL) was accepted as an oral at NeurIPS! We extend the theory for our Scion algorithm to include gradient clipping. Read about it here arxiv.org/abs/2506.01913
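For intuition, here is a minimal Python sketch of plain Euclidean gradient norm clipping, the baseline the paper generalizes to non-Euclidean norms; this is illustrative only, not the Scion algorithm itself, and the function name is hypothetical:

import torch

def clip_grad_norm(params, max_norm: float) -> torch.Tensor:
    """Vanilla gradient norm clipping: rescale the gradient g to
    g * min(1, max_norm / ||g||), leaving its direction unchanged."""
    grads = [p.grad for p in params if p.grad is not None]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, max_norm / (total_norm.item() + 1e-12))
    for g in grads:
        g.mul_(scale)  # in-place rescale, applied before the optimizer step
    return total_norm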
19.09.2025 16:48
Thanks David!
19.09.2025 16:34
Thanks!
19.09.2025 16:33
Our work on the generalization of Flow Matching got an oral at NeurIPS!
Go see @quentinbertrand.bsky.social present it there :)
Excited to announce the Workshop on the Principles of Generative Models at @euripsconf.bsky.social (the European conference parallel to NeurIPS 2025)
Dec 6–7, Copenhagen
Deadline for contributions: Oct 17
Website: sites.google.com/view/prigm-e...
Congratulations Anna!!
09.09.2025 11:36
Yes, everything will be in English!
04.09.2025 12:12
Yes!
04.09.2025 12:11
Yes... it's a trade-off with having enough slots and enough time for discussion at the posters. You can arrive a bit after the start if needed, and otherwise it should be accessible remotely
04.09.2025 08:22
One-day workshop on Diffusion models and Flow matching, October 24th at @ensdelyon.bsky.social
Registration and call for contributions (short talk and poster) are open at
gdr-iasis.cnrs.fr/reunions/mod...
Super elegant approach!
27.06.2025 09:09
On second thought, I'm not sure I understood. In the classical FM loss you do have to learn this derivative, no? The loss is:
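For reference, the classical conditional flow matching loss with the linear interpolant (the original post attached the formula as an image; this is presumably the standard form):

$$\mathcal{L}_{\mathrm{CFM}}(\theta) = \mathbb{E}_{t,\,x_0,\,x_1}\,\big\|\,v_\theta(x_t, t) - (x_1 - x_0)\,\big\|^2, \qquad x_t = (1-t)\,x_0 + t\,x_1.$$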
27.06.2025 05:53
I was thinking of this:
27.06.2025 05:42
I was thinking of the linear interpolant, yes; I haven't seen papers where others are used
26.06.2025 15:57
It could even be velocity matching, and this time you do learn to match the *conditional* velocities
26.06.2025 14:53
Thanks for the kind words
26.06.2025 09:11
Then why does flow matching generalize?? Because it fails!
The inductive bias of the neural network prevents it from perfectly learning u* and overfitting.
In particular, neural networks fail to learn the velocity field at two particular time values.
See the paper for a finer analysis
We propose to regress directly against the optimal (deterministic) u* and show that it never degrades performance.
On the contrary, removing target stochasticity helps the model generalize faster.
Yet flow matching generates new samples!
A hypothesis to explain this paradox is target stochasticity: FM targets the conditional velocity field, i.e. only a stochastic approximation of the full velocity field u*
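(For reference, assuming the linear interpolant, the full field is the conditional expectation of the per-sample target, so the FM target $x_1 - x_0$ is an unbiased but stochastic estimate of it:)

$$u^*(x, t) = \mathbb{E}\big[\,x_1 - x_0 \;\big|\; x_t = x\,\big], \qquad x_t = (1-t)\,x_0 + t\,x_1.$$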
*We refute this hypothesis*: very early in training, the approximation almost equals u*
New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719
Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?
w/ @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social
On Saturday Anne will also present some very, very cool work on how to leverage Flow Matching models to obtain SOTA Plug-and-Play methods:
PnP-Flow: Plug-and-Play Image Restoration with Flow Matching, poster #150 in poster session 6, Saturday at 3 pm
arxiv.org/abs/2410.02423
It was received quite enthusiastically here so time to share it again!!!
Our #ICLR2025 blog post on Flow Matching was published yesterday: iclr-blogposts.github.io/2025/blog/co...
My PhD student @annegnx.bsky.social will present it tomorrow at ICLR, poster session 4, 3 pm, #549 in Hall 3/2B
The proximal operator generalizes projection in convex optimization. It converts minimisers to fixed points. It is at the core of nonsmooth splitting methods and was first introduced by Jean-Jacques Moreau in 1965. www.numdam.org/article/BSMF...
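A minimal Python sketch (illustrative, not from the post): for f = lam * ||.||_1 the prox has a closed form, soft-thresholding, and minimizers of a smooth-plus-nonsmooth sum are exactly fixed points of the proximal gradient map, which is the basis of splitting methods:

import numpy as np

def prox_l1(x, lam):
    """Prox of lam * ||.||_1, i.e. argmin_y lam*||y||_1 + 0.5*||y - x||^2:
    the closed form is soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Minimizers of 0.5*||Ax - b||^2 + lam*||x||_1 are fixed points of the
# proximal gradient map x -> prox_l1(x - step * grad, step * lam) (ISTA).
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(5)
for _ in range(500):
    x = prox_l1(x - step * A.T @ (A @ x - b), step * lam)
print(x)  # a sparse fixed point, i.e. a lasso solution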
15.04.2025 05:00
deepinv v0.3.0 is here, with many new features!
Our passionate team of contributors keeps shipping more exciting tools!
Deepinverse (deepinv.github.io) is a library for solving imaging inverse problems with deep learning.
I had a blast teaching a summer school on generative models, in particular flow matching, at AI Hub Senegal with @quentinbertrand.bsky.social and @remiemonet.bsky.social
Our material is publicly available!!! github.com/QB3/SenHubIA...